Feed aggregator

The quality of crossmodal perception hinges on two factors: the accuracy of the independent unimodal percepts and the ability to integrate information from different sensory systems. In humans, the capacity for cognitively demanding crossmodal perception declines from young to old age. Here, we propose a new approach for investigating the degree to which these factors contribute to crossmodal processing and its age-related decline, by replicating a medical study on visuo-tactile crossmodal pattern discrimination using state-of-the-art tactile sensing technology and artificial neural networks (ANNs). We implemented two ANN models to focus specifically on the relevance of early integration of sensory information in the crossmodal processing stream, a mechanism proposed to underlie efficient processing in the human brain. Applying an adaptive staircase procedure, we approached comparable unimodal classification performance for both modalities in the human participants as well as in the ANNs. This allowed us to compare crossmodal performance between and within the systems, independent of the underlying unimodal processes. Our data show that the unimodal classification accuracies of the tactile sensing technology are comparable to those of humans. For crossmodal discrimination by the ANNs, integrating high-level unimodal features at earlier stages of the crossmodal processing stream yields higher accuracies than the late integration of independent unimodal classifications. Compared to humans, the ANNs achieve higher accuracies than older participants in both the unimodal and the crossmodal condition, but lower accuracies than younger participants in the crossmodal task. Taken together, we show that state-of-the-art tactile sensing technology can perform a complex tactile recognition task at levels comparable to humans. For crossmodal processing, human-inspired early sensory integration appears to improve the performance of artificial neural networks. Still, younger participants seem to employ more efficient crossmodal integration mechanisms than those modeled in the proposed ANNs. Our work demonstrates how collaborative research in neuroscience and embodied artificial neurocognitive models can help derive models that inform the design of future neurocomputational architectures.
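
The two ANN models contrast early integration of high-level unimodal features with late integration of independent unimodal classifications. As a rough structural illustration only (the abstract does not specify the architecture, so every layer size, module name, and the logit-averaging rule below are assumptions), here is a minimal PyTorch sketch of the two fusion schemes:

```python
import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    """Fuse high-level unimodal features, then classify on the joint representation."""
    def __init__(self, vis_dim=64, tac_dim=64, n_classes=4):
        super().__init__()
        self.vis_enc = nn.Sequential(nn.Linear(vis_dim, 32), nn.ReLU())
        self.tac_enc = nn.Sequential(nn.Linear(tac_dim, 32), nn.ReLU())
        self.head = nn.Linear(64, n_classes)  # shared head sees both modalities at once

    def forward(self, vis, tac):
        fused = torch.cat([self.vis_enc(vis), self.tac_enc(tac)], dim=-1)
        return self.head(fused)

class LateFusion(nn.Module):
    """Classify each modality independently, then merge the finished decisions."""
    def __init__(self, vis_dim=64, tac_dim=64, n_classes=4):
        super().__init__()
        self.vis_clf = nn.Sequential(nn.Linear(vis_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))
        self.tac_clf = nn.Sequential(nn.Linear(tac_dim, 32), nn.ReLU(), nn.Linear(32, n_classes))

    def forward(self, vis, tac):
        # one simple merge rule: average the per-modality logits
        return 0.5 * (self.vis_clf(vis) + self.tac_clf(tac))

vis, tac = torch.randn(8, 64), torch.randn(8, 64)
print(EarlyFusion()(vis, tac).shape, LateFusion()(vis, tac).shape)  # both (8, 4)
```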

Another DARPA SubT team that’s been doing its own in-cave practice to prepare for the final event next year is Team CoSTAR, from NASA JPL and Caltech. CoSTAR, of course, won the SubT Urban Circuit earlier this year with their team of wheeled and legged robots, which was awesome—but you’d maybe expect that for a group developing planetary exploration robots, places like urban environments and man-made tunnels wouldn’t necessarily be their top priority, right? Unless there’s something they’re not telling us, and I’m sure it’s aliens.*

NASA’s been working on robotic cave exploration for a long, long time, and Team CoSTAR (and the SubT Challenge) fit right in with that. The team and its robots have been spending some time in lava tubes, and we asked some folks from NASA JPL how it’s been going.

This interview features Ali Agha and Ben Morrell of NASA JPL, along with Jen Blank, who leads a NASA lava-tube science team.

IEEE Spectrum: What cave environments were you able to test in? What were your criteria for these places, and how did you find them?

Ali Agha: Natural caves are an important target for future NASA missions, offering potential evidence of past and present biology as well as future locations that could provide protection for human habitation. In collaboration with the NASA Science Mission Directorate, we have been looking into exploring Martian-analog caves at Lava Beds National Monument in Northern California. We have a long-term relationship with these caves through NASA projects, and they have also been our testing location to prepare for the DARPA Subterranean Challenge Cave Circuit.

Ben Morrell: The cave segment of the Subterranean Challenge is an extremely exciting one for our team because the test locations align so well with NASA’s long-term goals: to explore caves on the moon and Mars, and specifically lava tubes, the caves formed from volcanic flows that we know are present on these other worlds. A NASA team, led by Jen Blank, was already testing robotic science exploration of lava tubes, and had selected Lava Beds National Monument as an excellent analog for lava tubes on Mars. While great for NASA, this site also provides a rich diversity of challenges, with over 800 caves in the National Monument.

Photo: Team CoSTAR Team CoSTAR took its Spot robot to explore Martian-analog extreme terrains and lava tubes in Lava Beds National Monument, Tulelake, Calif.

What do you feel like the biggest difference was, going from an urban environment to a cave environment?

Agha: The biggest difference is in traversability. Urban underground environments have higher levels of verticality, including stairs, multiple levels, and challenging maze-like structures. Caves, on the other hand, have very harsh and extreme terrain that is difficult even for humans to traverse. This puts a lot of stress on the traversability and hazard-avoidance components of the autonomy solution.

Morrell: The 3D maps are infinitely more beautiful, revealing the wonder of mother nature as the robots explore the environment. The interesting shape of the caves both helps lidar-based mapping, with many distinct geometric features, and brings additional challenges, with more subtle and consistent vertical variations than in previous environments.

Can you give some examples of cave-specific challenges that were surprising to you?

Agha: The lava flow terrain (aa and pahoehoe) was more extreme than what we were expecting; to the extent that our team members were not able to walk on certain parts of the terrain.

Morrell: The caves brought such a rich variety of traversability challenges that we needed new ways of looking at local planning, ways that consider the whole path over a hazard. Steep “lava-fall” slopes, for instance, had paths our robots could traverse, but only if approached in the appropriate way. This is a challenge even for humans, yet we needed to figure out how to program our robots to do that.

One surprise particular to lava tubes was the otherworldly friction of the surfaces. The lava flows are by far the most unforgiving and grippy surfaces we have tested on. This turned out to be a large benefit for legged robots, but caused a lot of issues for wheeled robots relying on skid-steering to turn. 
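
To make the planning idea above concrete (judging the whole traverse over a hazard rather than each step in isolation), here is a minimal sketch of a planner that scores complete paths against a terrain-risk grid: cells above a hard hazard limit are never entered, and accumulated risk along the path is penalized. The grid, cost weights, and thresholds are invented for illustration; this is not CoSTAR's NeBula planner.

```python
import heapq
import numpy as np

def plan_over_risk(grid, start, goal, hard_limit=0.9, risk_weight=5.0):
    """Dijkstra over a 2D terrain-risk grid (0 = easy, 1 = impassable).

    A cell riskier than `hard_limit` is never entered, and the running
    cost penalizes total risk along the path, so the planner judges the
    whole traverse over a hazard rather than each step in isolation.
    """
    rows, cols = grid.shape
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            path = [cell]
            while cell in prev:
                cell = prev[cell]
                path.append(cell)
            return path[::-1]
        if d > dist.get(cell, float("inf")):
            continue  # stale queue entry
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] < hard_limit:
                nd = d + 1.0 + risk_weight * grid[nr, nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nd, (nr, nc)))
    return None  # no traversable path

# toy example: a "lava-fall" band that is cheap to skirt but costly to cross
risk = np.zeros((20, 20)); risk[8:12, :15] = 0.7
print(plan_over_risk(risk, (0, 0), (19, 19)))
```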

Jen Blank: The whole team was surprised and encouraged by the facility with which the [Boston Dynamics] legged robot was able to navigate the different lava terrains—from the pahoehoe, or rope-like, flow texture of much of the cave floors to the aa: blocky, irregular, angular, cauliflower-sized lava. The robot was able to move across table-sized, loose tabular fragments of ceiling collapse and approach the edges of a cave where ancient flows cooled to leave a ledge or “bathtub ring” of chilled lava behind. Also positive was the robot’s ability to enter and exit the primary cave of our study—though we picked a cave with the easiest access we could find (i.e., one with a combination of natural and constructed steps and pathways).

“The 3D maps are infinitely more beautiful, revealing the wonder of mother nature as the robots explore the environment” —Ben Morrell, NASA JPL

How did your approach change from the systems you used for the Urban Circuit?

Agha: Due to COVID-19 restrictions and challenges, we haven't had a lot of chances to work with hardware during the last several months. So our hardware solutions have not changed much. But from a software perspective, we have been improving various components of the algorithms in simulation environments, including our planning and perception methods. 

Morrell: One of the large areas of development in our team has been in our planning algorithms, with substantial theoretical and implementation upgrades. These developments were motivated by experiences in the Urban competition, with large-scale environments containing many rooms, as well as by cave environments that can vary from narrow corridors to large open caverns. We have also focused on support for the operator, building tools to offload tasks and simplify the actions required to manage the robot team.

What kind of experience did your robot operators have in the cave, and how was it different from Tunnel and Urban?

Morrell: Our test locations were smaller and less complex relative to Urban, hence we adjusted testing to shorter timeframes and fewer robots. This adjustment, along with advancements in global planning, autonomy, and operator-assistance tools, made the operator’s experience more relaxed than in the previous circuits. This was a win for the team, as a relaxed operator is the goal we are all aiming for, and we feel we are making gradual but consistent progress towards this goal.

One aspect that was more challenging compared to Urban was recognizing potentially hazardous features in the environment. Rather than stairs, the operator had to recognize low ceilings and sudden drop-offs in an unstructured 3D map. 

Did you hold a mock Cave Circuit Competition? If so, how did it go, and what did you learn?

Morrell: We did aim to hold a mock Cave Circuit at our test site in Lava Beds National Monument. Our approach to setting up a mock competition leveraged the previous scans of these caves by the NASA science team, which we used as our ground-truth map (similar to those provided by DARPA after the Tunnel and Urban Circuits). We used the map to select artifact locations and survey-marker locations. In the field, we used a total station with those survey markers to measure the position of our portable calibration gate relative to the map. Using these gate coordinates for robot calibration, we could then run the tests like the DARPA competitions!

Lessons learned in the setup? Urban environments are wonderfully structured, with convenient right angles, corners, and straight corridors amenable to surveying, and floor plans with which to set up ground-truth artifact locations and calibration gates. You can also rely on the expectation of flat floors to sanity-check final configuration results. We found this much more challenging in the cave environment, where it is difficult to tell if you are off by a degree or two in your survey, and hard to be confident in the final result.

The mock competition was invaluable for evaluating components that are otherwise hard to test: operations under stress, artifact global localization, and whether our coverage planning actually allows us to see all artifacts. We learned the value of working on reducing the operator load, how to adjust our planners to accommodate our artifact sensing, and some of the areas for improvement in our artifact-localization pipeline.
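
The calibration-gate step described above (surveying gate markers with a total station, then expressing robot reports in the ground-truth map frame) amounts to recovering a rigid transform from corresponding points. Below is a minimal sketch using the standard Kabsch/SVD construction; the marker coordinates are made up, and nothing here is claimed to be CoSTAR's actual pipeline.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ~= R @ src + t.

    src, dst: (N, 3) arrays of corresponding points (N >= 3, not collinear).
    Standard Kabsch/SVD solution.
    """
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflections
    R = Vt.T @ D @ U.T
    t = dst_c - R @ src_c
    return R, t

# hypothetical survey: gate markers in the gate frame vs. the ground-truth map frame
gate = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.2]])
mapf = np.array([[12.3, -4.1, 0.5], [13.2, -3.7, 0.5], [11.9, -3.2, 0.7]])
R, t = rigid_transform(gate, mapf)

# any robot-reported position in the gate frame can now be scored in map coordinates
artifact_in_gate = np.array([2.5, 0.8, 0.1])
print(R @ artifact_in_gate + t)
```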

Photo: Team CoSTAR The researchers were impressed by Spot’s ability to navigate the different lava terrains.

Can you describe any particularly notable successes or failures?

Morrell: In one of our tests, we were extremely happy to have a fully autonomous run of our Spot robot, with exploration of all of the cave environment. Full autonomy is what we are pushing towards for NASA’s cave exploration goals, as well as for the competition, hence demonstrations towards this end are great successes for the team.

Another success was the remarkably pain-free transition of many components from sim to real, such as planners, autonomy, and operations tools. This is a testament to the team’s efforts in setting up powerful simulation systems and in doing monthly mini games in simulation (our own “mini-Virtual Cave Circuits”).

We found that wheeled robots struggled in the lava tube environments, with wear and tear causing some fatal hardware failures in the field that were hard to address under COVID restrictions.

If you could go back and repeat the Tunnel Circuit and Urban Circuit, how do you think you would do?

Agha: Different aspects of our overall autonomy solution, referred to as NeBula, have been improved during the last several months. In particular, the traversability and planning aspects have been improved, and we believe that with our current solutions we would have been able to explore further ground in both the Tunnel and Urban competitions.

Morrell: Adding to Ali’s comments, it would be very exciting to be able to try again! We believe our systems will be much more efficient in exploration, more accurate in localization, and have less downtime through enhanced autonomy and operator support. How would we go compared to the other teams? We can’t wait to find out in the final competition! We know the other teams have been making incredible progress and we are doing all we can to keep up, and look forward to testing with the other teams soon! 

How are you feeling about the combined circuit for the SubT Final?

Agha: We are really excited to see what DARPA has in mind for the finals circuit and curious to see how the three environments (tunnel, cave, and urban) can be merged into one course.

Morrell: The final circuit really presents an unrivalled opportunity to assess just how capable our autonomous systems are. It is the perfect test, after years of research and development, to show what our team has been able to accomplish. However, there is still a long way to go and a lot to do. The ever-motivating competitive element and the knowledge of other teams’ continuing advances are driving us to keep pushing and improving.

Now that you’ve been through Tunnel and Urban and your own version of Cave, do you feel like you’re approaching a generalizable solution for underground environments?

Agha: Yes, our approach from the beginning was targeting a solution that can be generalized across a wide range of environment types. Field testing in different types of environments has definitely helped with identifying aspects of the solution that are not resilient to environment change, and has given us the opportunity to enhance those components and come up with more general solutions that can work across different types of environments.

Morrell: Commendations to the DARPA staff designing the competition, as the variety of challenges, from the large scale of Tunnel through the complexity and multi-level aspects of Urban to the extreme terrains of Cave, truly drives toward a generally capable solution. We know, however, that there is a greater diversity of mines, caves, and urban environments than what we have tested in, and hence we will keep testing in a variety of environments to unveil the unknown-unknowns (other than the ever-secret final competition setup) and to push our solution to be more robust and generalizable.

*It’s not aliens.

A fascinating challenge in the field of human–robot interaction is the possibility of endowing robots with emotional intelligence in order to make the interaction more intuitive, genuine, and natural. To achieve this, a critical point is the robot’s capability to infer and interpret human emotions. Emotion recognition has been widely explored in the broader fields of human–machine interaction and affective computing. Here, we report recent advances in emotion recognition, with particular regard to the human–robot interaction context. Our aim is to review the state of the art of currently adopted emotional models, interaction modalities, and classification strategies and to offer our point of view on future developments and critical issues. We focus on facial expressions, body poses and kinematics, voice, brain activity, and peripheral physiological responses, also providing a list of available datasets containing data from these modalities.

Robots that physically interact with their surroundings, in order to accomplish tasks or assist humans in their activities, need to exploit contact forces in a safe and proficient manner. Impedance control is considered a prominent approach in robotics for avoiding large impact forces while operating in unstructured environments. In such environments, the conditions under which interaction occurs may vary significantly during task execution. This demands that robots be endowed with online adaptation capabilities to cope with sudden and unexpected changes in the environment. In this context, variable impedance control arises as a powerful tool to modulate the robot's behavior in response to variations in its surroundings. In this survey, we present the state of the art of approaches devoted to variable impedance control from control and learning perspectives (separately and jointly). Moreover, we propose a new taxonomy for mechanical impedance based on variability, learning, and control. The objective of this survey is to bring together the concepts and efforts developed so far in this field and to describe the advantages and disadvantages of each approach. The survey concludes with open issues in the field and an envisioned framework that may potentially solve them.
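
For context, a classical impedance controller renders the robot as a virtual spring-damper around a reference, tau = K (q_des - q) + D (qd_des - qd); variable impedance control makes K and D functions of time or task state. A minimal single-joint sketch follows; the gains, the Gaussian stiffness schedule, and the unit-inertia plant are invented for illustration and are not taken from any surveyed method.

```python
import numpy as np

def impedance_torque(q, qd, q_des, qd_des, K, D):
    """Joint torque from a spring-damper law: tau = K (q_des - q) + D (qd_des - qd)."""
    return K * (q_des - q) + D * (qd_des - qd)

def stiffness_schedule(q, q_des, k_min=5.0, k_max=50.0, width=0.3):
    """Variable impedance: stiffen near the target, stay compliant far from it."""
    err = abs(q_des - q)
    return k_min + (k_max - k_min) * np.exp(-(err / width) ** 2)

# toy simulation of a single joint (unit inertia, 1 kHz control loop)
dt, q, qd, q_des = 1e-3, 0.0, 0.0, 1.0
for _ in range(2000):
    K = stiffness_schedule(q, q_des)
    D = 2.0 * np.sqrt(K)              # keep the virtual system near critical damping
    tau = impedance_torque(q, qd, q_des, 0.0, K, D)
    qd += tau * dt                    # unit inertia: q'' = tau
    q += qd * dt
print(round(q, 3))  # converges toward q_des = 1.0
```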

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online]
HRI 2021 – March 8-11, 2021 – [Online]
RoboSoft 2021 – April 12-16, 2021 – [Online]

Let us know if you have suggestions for next week, and enjoy today's videos.

Look who’s baaaack: Jibo! After being sold (twice?), this pioneering social home robot (it was first announced back in 2014!) now belongs to NTT Disruption, which was described to us as the “disruptive company of NTT Group.” We are all for disruption, so this looks like a great new home for Jibo. 

[ NTT Disruption ]

Thanks Ana!

FZI's Christmas Party was a bit of a challenge this year; good thing robots are totally competent to have a party on their own.

[ FZI ]

Thanks Arne!

Do you have a lonely dog that just wants a friend to watch cat videos on YouTube with? The Danish Technological Institute has a gift idea for you.

[ DTI ]

Thanks Samuel!

Once upon a time, not so far away, there was an elf who received a very special gift. Watch this heartwarming story. Happy Holidays from the Robotiq family to yours!

Of course, these elves are not now unemployed; they've instead moved over to toy design full time!

[ Robotiq ]

An elegant Christmas video from the Dynamic Systems Lab; make sure to watch through to the very end for a little extra cheer.

[ Dynamic Systems Lab ]

Thanks Angela!

Usually I complain when robotics companies make holiday videos without any real robots in them, but this is pretty darn cute from Yaskawa this year.

[ Yaskawa ]

Here's our little Christmas gift to the fans of strange dynamic behavior. The gyro will follow any given shape as soon as the tip touches its edge and the rotation is fast enough. The friction between tip and shape generates a tangential force, creating a moment such that the gyroscopic reaction pushes the tip towards the shape. The resulting normal force produces a moment that guides the tip along the shape's edge.

[ TUM ]
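
The mechanism described above follows from the gyroscopic relation between an applied moment and the change of spin angular momentum; as a sketch of the reasoning, with generic symbols not taken from TUM's write-up:

```latex
% Tip friction F_t exerts a moment about the center of mass, M = r x F_t.
% For a fast-spinning gyro, the moment does not tip the gyro over but
% precesses its angular momentum L:
\[
  \mathbf{M} \;=\; \mathbf{r}\times\mathbf{F}_t \;=\; \frac{d\mathbf{L}}{dt}
  \;=\; \boldsymbol{\Omega}\times\mathbf{L}
  \quad\Longrightarrow\quad
  \boldsymbol{\Omega} \;=\; \frac{\mathbf{L}\times\mathbf{M}}{\lVert\mathbf{L}\rVert^{2}}
  \qquad (\text{for } \mathbf{M}\perp\mathbf{L}),
\]
% so the friction-induced precession presses the tip against the edge, and
% the edge's normal force in turn precesses the tip along the contour.
```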

Happy Holidays from Fanuc!

Okay but why does there have to be an assembly line elf just to put in those little cranks?

[ Fanuc ]

Astrobotic's cute little CubeRover is at NASA busy not getting stuck in places.

[ Astrobotic ]

Team CoSTAR is sharing more of their work on subterranean robotic exploration.

[ CoSTAR ]

Skydio has introduced Skydio Autonomy Enterprise Foundation (AEF), a new software product that delivers advanced AI-powered capabilities to assist the pilot during tactical situational-awareness scenarios and detailed industrial asset inspections. Designed for professionals, it offers an enterprise-caliber flight experience through the new Skydio Enterprise application.

[ Skydio ]

GITAI's S1 autonomous robot will conduct two experiments: IVA (Intra-Vehicular Activity) tasks such as switch and cable operations, and assembly of structures and panels to demonstrate its capability for ISA (In-Space Assembly) tasks. This video was recorded in the Nanoracks Bishop Airlock mock-up facility at GITAI's Tokyo office.

[ GITAI ]

It's no Atlas, but this is some impressive dynamic balancing from iCub.

[ IIT ]

The Campaign to Stop Killer Robots and I don't agree on a lot of things, and I don't agree with a lot of the assumptions made in this video, either. But, here you go!

[ CSKR ]

I don't know much about this robot, but I love it.

[ Columbia ]

Most cable-suspended robots have a very well-defined workspace, but you can increase that workspace by swinging them around. Wheee!

[ Laval ]

How you know your robot's got some skill: "to evaluate the performance in climbing over the step, we compared the R.L. result to the results of 12 students who attempted to find the best planning. The RL outperformed all the group, in terms of effort and time, both in continuous (joystick) and partition planning."

[ Zarrouk Lab ]

In the Spring 2021 semester, mechanical engineering students taking MIT class 2.007, Design and Manufacturing I, will be able to participate in the class’ iconic final robot competition from the comfort of their own home. Whether they take the class virtually or semi-virtually, students will be sent a massive kit of tools and materials to build their own unique robot along with a “Home Alone” inspired game board for the final global competition.

[ MIT ]

Well, this thing is still around!

[ Moley Robotics ]

Manuel Ahumada wrote in to share this robotic Baby Yoda that he put together with a little bit of help from Intel's OpenBot software.

[ YouTube ]

Thanks Manuel!

Here's what Zoox has been working on for the past half-decade.

[ Zoox ]

Current robot designs often reflect an anthropomorphic approach, apparently aiming to convince users through an ideal system that is maximally similar to, or even on par with, humans. The present paper challenges human-likeness as a design goal and questions whether simulating human appearance and performance adequately fits how humans think about robots in a conceptual sense, i.e., humans' mental models of robots and their self. Independent of technical possibilities and limitations, our paper explores robots' attributed potential to become human-like by means of a thought experiment. Four hundred eighty-one participants were confronted with fictional transitions from human-to-robot and robot-to-human, consisting of 20 subsequent steps. In each step, one part or area of the human (e.g., brain, legs) was replaced with robotic parts providing equal functionalities, and vice versa. After each step, the participants rated the remaining humanness and remaining self of the depicted entity on a scale from 0 to 100%. It emerged that the starting category (e.g., human, robot) serves as an anchor for all subsequent judgments and can hardly be overcome. Even when all body parts had been exchanged, a former robot was not perceived as totally human-like, and a former human not as totally robot-like. Moreover, humanness appeared to be a more sensitive and more easily denied attribute than robotness: after the objectively identical transition, exchanging the same parts, the former human was attributed less remaining humanness and self than the former robot was attributed remaining robotness and self. The participants' qualitative statements about why the robot had not become human-like often concerned the (unnatural) process of production, or simply argued that no matter how many parts are exchanged, the individual keeps its original entity. Based on such findings, we suggest that instead of designing maximally human-like robots in order to gain acceptance, it might be more promising to understand robots as a “species” of their own and underline their specific characteristics and benefits. Limitations of the present study and implications for future HRI research and practice are discussed.

Last week’s announcement that Hyundai acquired Boston Dynamics from SoftBank left us with a lot of questions. We attempted to answer many of those questions ourselves, which is typically bad practice, but sometimes it’s the only option when news like that breaks.

Fortunately, yesterday we were able to speak with Michael Patrick Perry, vice president of business development at Boston Dynamics, who candidly answered our questions about Boston Dynamics’ new relationship with Hyundai and what the near future has in store.

IEEE Spectrum: Boston Dynamics is worth 1.1 billion dollars! Can you put that valuation into context for us?

Michael Patrick Perry: Since 2018, we’ve shifted to becoming a commercial organization. And that’s included a number of things, like taking our existing technology and bringing it to market for the first time. We’ve gone from zero to 400 Spot robots deployed, building out an ecosystem of software developers, sensor providers, and integrators. With that scale of deployment and looking at the pipeline of opportunities that we have lined up over the next year, I think people have started to believe that this isn’t just a one-off novelty—that there’s actual value that Spot is able to create. Secondly, with some of our efforts in the logistics market, we’re getting really strong signals both with our Pick product and also with some early discussions around Handle’s deployment in warehouses, which we think are going to be transformational for that industry. 

So, the thing that’s really exciting is that two years ago, we were talking about this vision, and people said, “Wow, that sounds really cool, let’s see how you do.” And now we have the validation from the market saying both that this is actually useful, and that we’re able to execute. And that’s where I think we’re starting to see belief in the long-term viability of Boston Dynamics, not just as a cutting-edge research shop, but also as a business. 

Photo: Boston Dynamics Boston Dynamics says it has deployed 400 Spot robots, building out an “ecosystem of software developers, sensor providers, and integrators.”

How would you describe Hyundai’s overall vision for the future of robotics, and how do they want Boston Dynamics to fit into that vision?

In the immediate term, Hyundai’s focus is to continue our existing trajectories, with Spot, Handle, and Atlas. They believe in the work that we’ve done so far, and we think that combining with a partner that understands many of the industries we’re targeting, whether it’s manufacturing, construction, or logistics, can help us improve our products. And obviously as we start thinking about producing these robots at scale, Hyundai’s expertise in manufacturing is going to be really helpful for us.

Looking down the line, both Boston Dynamics and Hyundai believe in the value of smart mobility, and they’ve made a number of plays in that space. Whether it’s urban air mobility or autonomous driving, they’ve been really thinking about connecting the digital and the physical world through moving systems, whether that’s a car, a vertical takeoff and landing multi-rotor vehicle, or a robot. We are well positioned to take on the robotics side of that while also connecting to some of these other autonomous services.

Can you tell us anything about the kind of robotics that the Hyundai Motor Group has going on right now?

So they’re working on a lot of really interesting stuff—exactly how that connects, you know, it’s early days, and we don’t have anything explicitly to share. But they’ve got a smart and talented robotics team that’s working in a variety of directions that  shares overlap with us. Obviously, a lot of things related to autonomous driving shares some DNA with the work that we’re doing in autonomy for Spot and Handle, so it’s pretty exciting to see.

What are you most excited about here? How do you think this deal will benefit Boston Dynamics?

I think there are a number of things. One is that they have expertise in hardware in a way that’s unique. They understand and appreciate the complexity of creating large, complex robotic systems. So I think there’s some shared understanding of what it takes to create a great hardware product. And then they also have the resources to help us actually build those products together—they have manufacturing resources and things like that.

“Robotics isn’t a short-term game. We’ve scaled pretty rapidly, but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision”

Another thing that’s exciting is that Hyundai has some pretty visionary bets for autonomous driving and unmanned aerial systems, and all of that fits very neatly into the connected vision of robotics that we were talking about before. Robotics isn’t a short-term game. We’ve scaled pretty rapidly for a robotics company in terms of the scale of robots we’ve been able to deploy in the field, but if you start looking at what the full potential of a company like Boston Dynamics is, it’s going to take years to realize, and I think Hyundai is committed to that long-term vision.

And when you’ve been talking with Hyundai, what are they most excited about?

I think they’re really excited about our existing products and our technology. Looking at some of the things that Spot, Pick, and Handle are able to do now, there are applications that many of Hyundai’s customers could benefit from in terms of mobility, remote sensing, and material handling. Looking down the line, Hyundai is also very interested in smart city technology, and mobile robotics is going to be a core piece of that.

We tend to focus on Spot and Handle and Atlas in terms of platform capabilities, but can you talk a bit about some of the component-level technology that’s unique to Boston Dynamics, and that could be of interest to Hyundai?

Creating very power-dense actuator designs is something that we’ve been successful at for several years, starting back with BigDog and LS3. And Handle has some hydraulic actuators and valves that are pretty unique in terms of their design and capability. Fundamentally, we have a systems engineering approach that brings together both hardware and software internally. You’ll often see different groups that specialize in something, like great mechanical or electrical engineering groups, or great controls teams, but what I think makes Boston Dynamics so special is that we’re able to put everything on the table at once to create a system that’s incredibly capable. And that’s why with something like Spot, we’re able to produce it at scale, while also making it flexible enough for all the different applications that the robot is being used for right now.

It’s hard to talk specifics right now, but there are obviously other disciplines within mechanical engineering or electrical engineering or controls for robots or autonomous systems where some of our technology could be applied.

Photo: Boston Dynamics Boston Dynamics is in the process of commercializing Handle, iterating on its design and planning to get box-moving robots on-site with customers in the next year or two.

While Boston Dynamics was part of Google, and then SoftBank, it seems like there’s been an effort to maintain independence. Is it going to be different with Hyundai? Will there be more direct integration or collaboration?

Obviously it’s early days, but right now, we have support to continue executing against all the plans that we have. That includes all the commercialization of Spot, as well as things for Atlas, which is really going to be pushing the capability of our team to expand into new areas. That’s going to be our immediate focus, and we don’t see anything that’s going to pull us away from that core focus in the near term. 

As it stands right now, Boston Dynamics will continue to be Boston Dynamics under this new ownership.

How much of what you do at Boston Dynamics right now would you characterize as fundamental robotics research, and how much is commercialization? And how do you see that changing over the next couple of years?

We have been expanding our commercial team, but we certainly keep a lot of the core capabilities of fundamental robotics research. Some of it is very visible, like the new behavior development for Atlas where we’re pushing the limits of perception and path planning. But a lot of the stuff that we’re working on is a little bit under the hood, things that are less obvious—terrain handling, intervention handling, how to make safe faults, for example. Initially when Spot started slipping on things, it would flail around trying to get back up. We’ve had to figure out the right balance between the robot struggling to stand, and when it should decide to just lock its limbs and fall over because it’s safer to do that.

I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us. So we’ve been ramping up a lot of work over the last several years trying to get to an early but still valuable iteration of the technology, and we’ll continue pushing on that as we start learning what’s most useful to our customers.

“I’d say the other big thrust for us is manipulation. Our gripper for Spot is coming out early next year, and that’s going to unlock a new set of capabilities for us. We have years and years of locomotion experience, but the ability to manipulate is a space that’s still relatively new to us”

Looking back, Spot as a commercial robot has a history that goes back to robots like LS3 and BigDog, which were very ambitious projects funded by agencies like DARPA without much in the way of commercial expectations. Do you think these very early stage, very expensive, very technical projects are still things that Boston Dynamics can take on?

Yes—I would point to a lot of the things we do with Atlas as an example of that. While we don’t have immediate plans to commercialize Atlas, we can point to technologies that come out of Atlas that have enabled some of our commercial efforts over time. There’s not necessarily a clear roadmap of how every piece of Atlas research is going to feed over into a commercial product; it’s more like, this is a really hard fundamental robotics challenge, so let’s tackle it and learn things that we can then benefit from across the company. 

And fundamentally, our team loves doing cool stuff with robots, and you’ll continue seeing that in the months to come.

Photo: Boston Dynamics Spot’s arm with gripper is coming out early next year, and Boston Dynamics says that’s going to “unlock a new set of capabilities for us.”

What would it take to commercialize Atlas? And are you getting closer with Handle?

We’re in the process of commercializing Handle. We’re at a relatively early stage, but we have a plan to get the first versions for box moving on-site with customers in the next year or two. Last year, we did some on-site deployments as proof-of-concept trials, and using the feedback from that, we did a new design pass on the robot, and we’re looking at increasing our manufacturing capability. That’s all in progress.

For Atlas, it’s like the Formula 1 of robots—you’re not going to take a Formula 1 car and try to make it less capable so that you can drive it on the road. We’re still trying to see what applications would necessitate an energy-intensive, computationally intensive humanoid robot as opposed to something that’s more inherently stable. Trying to understand that application space is something that we’re interested in, and then down the line, we could look at creating new morphologies to help address specific applications. In many ways, Handle is the first version of that, where we said, “Atlas is good at moving boxes but it’s very complicated and expensive, so let’s create a simpler and smaller design that can achieve some of the same things.”

The press release mentioned a mobile robot for warehouses that will be introduced next year—is that Handle?

Yes, that’s the work that we’re doing on Handle.

As we start thinking about a whole robotic solution for the warehouse, we have to look beyond a high-power, low-footprint, dynamic platform like Handle and also consider things that are a little less exciting on video. We need a vision system that can look at a messy stack of boxes and figure out how to pick them up, and we need an interface between a robot and an order-building system—things where people might question why Boston Dynamics is focusing on them, because they don’t fit in with our crazy backflipping robots, but it’s really incumbent on us to create that full end-to-end solution.

Are you confident that under Hyundai’s ownership, Boston Dynamics will be able to continue taking the risks required to remain on the cutting edge of robotics?

I think we will continue to push the envelope of what robots are capable of, and I think in the near term, you’ll be able to see that realized in our products and the research that we’re pushing forward with. 2021 is going to be a great year for us.

The growing field of soft wearable exosuits is gradually gaining ground, proposing new complementary solutions in assistive technology with several advantages in terms of portability, kinematic transparency, ergonomics, and metabolic efficiency. These benefits can be exploited in several applications, ranging from strength and resistance augmentation in industrial scenarios to assistance or rehabilitation for people with motor impairments. To be effective, however, an exosuit needs to work synergistically with the human and match specific requirements in terms of both movement kinematics and dynamics: an accurate and timely intention-detection strategy is paramount for the acceptance and usability of such technology. We previously proposed to tackle this challenge by means of a model-based myoelectric controller, treating the exosuit as an external muscular layer in parallel with the human biomechanics and, as such, controlled by the same efferent motor commands as biological muscles. However, previous studies using classical control methods demonstrated that the level of the device's intervention and the effectiveness of task completion are not linearly related. Therefore, using a newly implemented EMG-driven controller, we isolated and characterized the relationship between assistance magnitude and muscular benefits, with the goal of finding a range of assistance that could make the controller versatile for both dynamic and static tasks. Ten healthy participants performed an experiment resembling functional activities of daily living under separate assistance conditions: without the device's active support and with different levels of intervention by the exosuit. Higher assistance levels resulted in larger reductions in the activity of the muscles augmented by the suit's actuation and in good motion accuracy, despite a decrease in movement velocities with respect to the no-assistance condition. Moreover, increasing the torque magnitude supplied by the exosuit significantly reduced the biological torque at the elbow joint and progressively delayed the onset of muscular fatigue. Thus, contrary to classical force-based and proportional myoelectric schemes, an appropriately tailored EMG-driven model-based controller can naturally match the user's intention and provide an assistance level that works symbiotically with the human biomechanics.
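
As a rough sketch of what an EMG-driven assistance loop can look like (not the authors' model-based myoelectric controller, which maps EMG through a musculoskeletal model), one can rectify and low-pass filter the raw EMG into an activation envelope, scale it into an assistive torque, and cap it at the chosen assistance level. The filter constants, gains, and sampling rate below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

FS = 1000.0  # assumed EMG sampling rate, Hz

def emg_envelope(raw, fs=FS, cutoff=3.0):
    """Rectify and low-pass filter raw EMG into a smooth activation envelope."""
    b, a = butter(2, cutoff / (fs / 2.0), btype="low")
    return lfilter(b, a, np.abs(raw - np.mean(raw)))

def assistive_torque(envelope, mvc, gain, tau_max):
    """Map normalized activation to exosuit torque, capped at the assistance level.

    mvc:     envelope value at maximum voluntary contraction (for normalization)
    gain:    Nm of assistance per unit of normalized activation
    tau_max: assistance level, i.e., the torque cap for this condition
    """
    activation = np.clip(envelope / mvc, 0.0, 1.0)
    return np.clip(gain * activation, 0.0, tau_max)

# toy usage: a burst of "biceps" activity produces a bounded elbow-assist torque
t = np.arange(0, 2.0, 1.0 / FS)
raw = 0.1 * np.random.randn(t.size) * (1.0 + 4.0 * ((t > 0.5) & (t < 1.5)))
tau = assistive_torque(emg_envelope(raw), mvc=0.2, gain=8.0, tau_max=4.0)
print(tau.max())  # never exceeds the 4.0 Nm assistance cap
```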

Recently, extratheses, also known as Supernumerary Robotic Limbs (SRLs), have been emerging as a new trend in the field of assistive and rehabilitation devices. We previously proposed the SoftHand X, a system composed of an anthropomorphic soft-hand extrathesis, a gravity-support boom, and a control interface for the patient. In preliminary tests, the system showed promise for assisting impaired people during activities of daily living and for countering learned non-use of the impaired arm. However, as with many robot-aided therapies, use of the system may induce side effects that are detrimental and worsen patients' conditions. One of the most common is the onset of alternative grasping strategies and compensatory movements, which clinicians need to counter in physical therapy. Before embarking on systematic experimentation with the SoftHand X on patients, it is essential to demonstrate that the system does not increase compensatory habits. This paper provides a detailed description of the compensatory movements performed by healthy subjects using the SoftHand X. Eleven right-handed healthy subjects took part in an experimental protocol in which kinematic data of the upper body and EMG signals of the arm were acquired. Each subject executed tasks with and without the robotic system, taking the latter condition as the reference for optimal behavior. Two configurations of the robotic hand were compared to understand whether this aspect affects the compensatory movements. Results demonstrated that use of the apparatus reduces the range of motion of the wrist, elbow, and shoulder, while increasing the range of trunk and head movements. EMG analysis, on the other hand, indicated that muscle activation was very similar across all conditions. These results suggest that the system may be used as an assistive device without causing over-use of the arm joints, and they open the way to clinical trials with patients.
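
A back-of-the-envelope version of the kinematic comparison is easy to sketch: compute the per-joint range of motion for assisted and unassisted trials and look at the difference. The arrays below are random placeholders standing in for motion-capture angle traces; joint names and shapes are assumptions, not the study's dataset.

```python
import numpy as np

# Placeholder angle traces, (timesteps, n_joints) in degrees; real data would
# come from the upper-body motion capture described above.
unassisted = np.random.uniform(0, 90, size=(500, 4))  # wrist, elbow, shoulder, trunk
assisted = np.random.uniform(0, 60, size=(500, 4))

def range_of_motion(angles):
    """Per-joint range of motion: max minus min over one trial."""
    return angles.max(axis=0) - angles.min(axis=0)

# Negative deltas flag joints the device restricts (wrist, elbow, shoulder);
# positive deltas flag possible compensation (trunk, head).
rom_delta = range_of_motion(assisted) - range_of_motion(unassisted)
print(dict(zip(["wrist", "elbow", "shoulder", "trunk"], np.round(rom_delta, 1))))
```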

Children begin to develop self-awareness when they associate images and abilities with themselves. Such “construction of self” continues throughout adult life as we constantly cycle through different forms of self-awareness, seeking to redefine ourselves. Modern technologies like screens and artificial intelligence threaten to alter our development of self-awareness, because children and adults are exposed to machines, tele-presences, and displays that increasingly become part of human identity. We use avatars, invent digital lives, and augment ourselves with digital imprints that depart from reality, making the development of self-identification adjust to digital technologies that blur the boundary between us and our devices. To empower children and adults to see themselves and artificially intelligent machines as separately aware entities, we created the persona of a salvaged supermarket security camera, refurbished and enhanced with the power of computer vision to detect human faces and project them on a large-scale 3D face sculpture. The surveillance camera system sometimes moves its head to point at human faces; at other times, humans have to get its attention by moving into its vicinity, creating a dynamic in which audiences attempt to see their own faces on the sculpture by gazing into the machine's eye. We found that audiences began to understand that machines interpret our faces as separate from our identities, with their own agendas and agencies that show in the way they serendipitously interact with us. The machine-projected images of us are the machine's own interpretation rather than ours, distancing us from our digital analogs. In the accompanying workshop, participants learn how computer vision works by putting on disguises to escape an algorithm that detects them as the same person by analyzing their faces. Participants learn that their own agency affects how machines interpret them, gaining an appreciation for the way their own identities and machines' awareness of them can be separate entities that can be manipulated for play. Together, the installation and workshop empower children and adults to think beyond identification with digital technology and to recognize the machine's own interpretive abilities, which lie separate from human beings' own self-awareness.
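
The face-detection half of such an installation can be approximated with off-the-shelf tools. The sketch below uses OpenCV's stock Haar-cascade detector on a webcam feed standing in for the security camera; it is a generic illustration of the technique, not the artists' actual pipeline.

```python
import cv2

# OpenCV ships a pre-trained frontal-face Haar cascade with opencv-python.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # webcam standing in for the salvaged camera
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:  # a disguise that breaks these features "escapes"
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("machine eye", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```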

To assist after-stroke individuals in rehabilitating their movements, research centers have developed lower-limb exoskeletons and control strategies for them. Robot-assisted therapy can help not only by providing support, accuracy, and precision while performing exercises, but also by adapting to different patient needs according to their impairments. As a consequence, different control strategies have been employed and evaluated, although with limited effectiveness. This work presents a bio-inspired controller based on the concept of motor primitives. The proposed approach was evaluated on a lower-limb exoskeleton in which the knee joint was driven by a series elastic actuator. First, to extract the motor primitives, the user torques were estimated by means of a generalized momentum-based disturbance observer combined with an extended Kalman filter. These data were provided to the control algorithm, which, at every swing phase, assisted the subject in performing the desired movement based on the analysis of their previous step. Tests were performed to evaluate the controller's performance for a subject walking actively, passively, and in a combination of these two conditions. Results suggest that the robot assistance is capable of compensating for the motor primitive weight deficiency when the subject exerts less torque than expected. Furthermore, although only the knee joint was actuated, the motor primitive weights with respect to the hip joint were influenced by the robot torque applied at the knee. The robot also generated torque to compensate for occasional asynchronous movements of the subject, and it adapted to a change in gait characteristics within three to four steps.
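
For readers wanting the gist of the primitive machinery, here is a hedged Python sketch: estimated user torques from past swing phases are factorized into non-negative motor primitives, and the robot torque fills in any weight deficiency in the latest stride. The placeholder data, component count, and least-squares weight fit are illustrative assumptions, not the published implementation.

```python
import numpy as np
from sklearn.decomposition import NMF

# Placeholder: estimated user knee torques for 20 past swing phases, each
# time-normalized to 100 samples (real data would come from the momentum-based
# disturbance observer + extended Kalman filter described above).
strides = np.abs(np.random.rand(20, 100))

# Extract a small set of non-negative motor primitives (shared torque shapes).
nmf = NMF(n_components=3, init="nndsvd", max_iter=500)
weights = nmf.fit_transform(strides)      # per-stride primitive weights
primitives = nmf.components_              # (3, 100) time-normalized shapes

# Reference weights from earlier active strides vs. the latest stride.
reference_w = weights[:10].mean(axis=0)
latest_w = np.linalg.lstsq(primitives.T, strides[-1], rcond=None)[0]

# The robot torque profile fills in the primitive-weight deficiency.
deficiency = np.clip(reference_w - latest_w, 0.0, None)
robot_torque = deficiency @ primitives    # torque to add during the next swing
```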

Natural motion types found in the skeletal and muscular systems of vertebrate animals inspire researchers to transfer this ability into engineered motion, which is highly desired in robotic systems. Dielectric elastomer actuators (DEAs) have shown promising capabilities as artificial muscles for driving such structures, as they are soft, lightweight, and can generate large strokes. For maximum performance, dielectric elastomer membranes need to be sufficiently pre-stretched. This is challenging because it is difficult to integrate pre-stretched membranes into entirely soft systems, since the stored strain energy can significantly deform soft elements. Here, we present a soft robotic structure with a bioinspired skeleton integrated into a soft body element, driven by an antagonistic pair of DEA artificial muscles that enable the robot to bend. In its equilibrium state, the setup maintains optimum isotropic pre-stretch. The robot itself has a length of 60 mm and is based on a flexible silicone body with embedded transverse 3D-printed struts. These rigid bone-like elements lead to an anisotropic bending stiffness, which allows bending in only one plane while maintaining the DEA's necessary pre-stretch in the other planes. The bones therefore define the degrees of freedom and stabilize the system. The DEAs are manufactured by aerosol deposition of a carbon-silicone-composite ink onto a stretchable membrane, which is then heat-cured. Afterwards, the actuators are bonded to the top and bottom of the silicone body. The robotic structure shows large, well-defined bimorph bending curvature and operates in static as well as dynamic motion. Our experiments characterize the influence of membrane pre-stretch and varied silicone-body stiffness on the static and dynamic bending displacement, resonance frequencies, and blocking forces. We also present an analytical model, based on Classical Laminate Theory, for identifying the main influencing parameters. Due to its simple design and processing, our new concept of a bioinspired DEA-based robotic structure with skeletal and muscular reinforcement offers a wide range of robotic applications.
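
The paper's analytical model builds on Classical Laminate Theory, but a much simpler, standard relation already captures what drives the actuation: the Maxwell pressure a DEA membrane experiences, p = ε0·εr·(V/d)². The numbers below (3 kV, a 50 µm membrane, εr = 3, a ~1 MPa modulus) are generic DEA values chosen for illustration, not this device's specifications.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def maxwell_pressure(voltage, thickness, eps_r=3.0):
    """Electrostatic pressure on a DEA membrane: p = eps0 * eps_r * (V/d)^2."""
    return EPS0 * eps_r * (voltage / thickness) ** 2

p = maxwell_pressure(3e3, 50e-6)  # ~96 kPa for 3 kV across a 50 um membrane
thickness_strain = p / 1.0e6      # rough strain estimate for a ~1 MPa silicone
print(f"Maxwell pressure: {p / 1e3:.1f} kPa, est. strain: {thickness_strain:.1%}")
```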

There is a substantial number of telerobotics and teleoperation applications, ranging from space operations and ground/aerial robotics to drive-by-wire systems and medical interventions. Major obstacles for such applications include latency, channel corruption, and limited bandwidth, all of which limit teleoperation efficacy. This survey reviews the time delay problem in teleoperation systems. We briefly review early solutions, which consist of control-theory-based models and user-interface designs, and focus on newer approaches developed since 2014. Future solutions to the time delay problem will likely be hybrids that combine modeling of user intent, prediction of robot movements, and time delay prediction, all potentially using time series prediction methods. Hence, we examine methods that are primarily based on time series prediction. Recent prediction approaches take advantage of advances in nonlinear statistical models as well as machine learning and neural network techniques. We review Recurrent Neural Network, Long Short-Term Memory, Sequence-to-Sequence, and Generative Adversarial Network models and examine each of these approaches for addressing time delay. As time delay is still an unsolved problem, we suggest some possible future research directions based on information-theoretic modeling, which may lead to promising new approaches to advancing the field.
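
As a sketch of the time-series-prediction direction the survey highlights, the PyTorch model below learns to predict the robot state a fixed number of steps ahead from a window of delayed observations. Dimensions, layer sizes, and the constant-horizon assumption are all illustrative; a real system would also need to handle variable delay and operator intent.

```python
import torch
import torch.nn as nn

class DelayCompensator(nn.Module):
    """Predict the robot state `horizon` steps ahead from delayed observations."""
    def __init__(self, state_dim=6, hidden=64, horizon=10):
        super().__init__()
        self.lstm = nn.LSTM(state_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, state_dim * horizon)
        self.horizon, self.state_dim = horizon, state_dim

    def forward(self, x):             # x: (batch, window, state_dim)
        out, _ = self.lstm(x)
        pred = self.head(out[:, -1])  # decode from the last hidden state
        return pred.view(-1, self.horizon, self.state_dim)

# One training step: regress against the states that actually arrived later.
model = DelayCompensator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 50, 6)  # placeholder windows of delayed states
y = torch.randn(32, 10, 6)  # placeholder ground-truth future states
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```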

Multi-function swarms are swarms that solve multiple tasks at once. For example, a quadcopter swarm could be tasked with exploring an area of interest while simultaneously functioning as an ad-hoc relay network. With this type of multi-function comes the challenge of handling potentially conflicting requirements simultaneously. Using the Quality-Diversity algorithm MAP-Elites in combination with a suitable controller structure, a framework for automatic behavior generation in multi-function swarms is proposed. The framework is tested on a scenario with three simultaneous tasks: exploration, communication network creation, and geolocation of Radio Frequency (RF) emitters. A repertoire is evolved, consisting of a wide range of controllers, or behavior primitives, with different characteristics and trade-offs across the tasks. This repertoire enables the swarm to transition online between behaviors with different trade-offs, depending on the situational requirements. Furthermore, the effect of noise on the behavior characteristics in MAP-Elites is investigated. A moderate number of re-evaluations is found to increase robustness while keeping the computational requirements relatively low. A few selected controllers are examined, and the dynamics of transitioning between them are explored. Finally, the study investigates the importance of individual sensor and controller inputs through ablation, where individual inputs are disabled and their impact on the performance of the swarm controllers is assessed and analyzed.
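
A minimal MAP-Elites loop is compact enough to sketch: keep one elite per cell of a discretized behavior space, and fill the archive by mutating existing elites. The evaluate and descriptor callables below are placeholders for the swarm simulation's fitness and behavior characterization; the bin counts and mutation scale are arbitrary assumptions.

```python
import numpy as np

def map_elites(evaluate, descriptor, dim=8, bins=(10, 10), iters=5000):
    """Keep the best genome per behavior cell; mutate elites to fill the map."""
    archive = {}  # cell index -> (fitness, genome)
    for _ in range(iters):
        if archive and np.random.rand() > 0.1:
            key = list(archive)[np.random.randint(len(archive))]
            genome = archive[key][1] + 0.1 * np.random.randn(dim)  # mutate elite
        else:
            genome = np.random.randn(dim)  # random bootstrap
        fitness = evaluate(genome)
        b = descriptor(genome)  # behavior characterization in [0, 1)^2
        cell = tuple(min(int(b[i] * bins[i]), bins[i] - 1) for i in range(2))
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, genome)  # new elite for this cell
    return archive

# Toy usage: in the paper, evaluate/descriptor would instead come from
# simulating the swarm on the exploration, networking, and geolocation tasks.
toy = map_elites(evaluate=lambda g: -float(np.sum(g ** 2)),
                 descriptor=lambda g: (np.tanh(g[:2]) + 1.0) / 2.0)
print(len(toy), "cells filled")
```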

Replicating the human sense of touch is complicated—electronic skins need to be flexible, stretchable, and sensitive to temperature, pressure, and texture; they need to be able to read biological data and provide electronic readouts. On top of all that, powering an electronic skin for continuous, real-time use is a big challenge.

To address this, researchers from the University of Glasgow have developed an energy-generating e-skin made out of miniaturized solar cells, without dedicated touch sensors. The solar cells not only generate their own power—and some surplus—but also provide tactile capabilities for touch and proximity sensing. An early-view paper of their findings was published in IEEE Transactions on Robotics.

When exposed to a light source, the solar cells on the e-skin generate energy. If a cell is shadowed by an approaching object, the intensity of the light, and therefore the energy generated, drops, falling to zero when the cell makes contact with the object, confirming touch. In proximity mode, the light intensity indicates how far away the object is from the cell. “In real time, you can then compare the light intensity…and after calibration find out the distances,” says Ravinder Dahiya of the Bendable Electronics and Sensing Technologies (BEST) Group, James Watt School of Engineering, University of Glasgow, where the study was carried out. The team paired infrared LEDs with the solar cells to get better proximity-sensing results.
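
The proximity readout described here is essentially a calibration-and-interpolation problem, something like the hypothetical sketch below: record the normalized cell output at known distances under the working light source, then invert that mapping at run time. The calibration numbers and touch threshold are invented for illustration, not measurements from the paper.

```python
import numpy as np

# Hypothetical calibration: normalized cell output at known object distances,
# measured once under the fixed light source (monotonically increasing).
cal_distance_mm = np.array([5.0, 10.0, 20.0, 40.0, 80.0])
cal_intensity = np.array([0.20, 0.45, 0.70, 0.88, 0.97])

def estimate_distance(intensity, touch_threshold=0.02):
    """Map a shadow-induced intensity drop to object distance by interpolation."""
    if intensity < touch_threshold:
        return 0.0  # output has collapsed to ~zero: contact confirmed
    return float(np.interp(intensity, cal_intensity, cal_distance_mm))

print(estimate_distance(0.55))  # object somewhere between 10 and 20 mm away
```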

To demonstrate their concept, the researchers wrapped a generic 3D-printed robotic hand in their solar skin, which was then recorded interacting with its environment. The proof-of-concept tests showed an energy surplus of 383.3 mW from the palm of the robotic hand. “The eSkin could generate more than 100 W if present over the whole body area,” they reported in their paper.

“If you look at autonomous, battery-powered robots, putting an electronic skin [that] is consuming energy is a big problem because then it leads to reduced operational time,” says Dahiya. “On the other hand, if you have a skin which generates energy, then…it improves the operational time because you can continue to charge [during operation].” In essence, he says, they turned a challenge—how to power the large surface area of the skin—into an opportunity—by turning it into an energy-generating resource.

Dahiya envisages numerous applications for BEST’s innovative e-skin, given its material-integrated sensing capabilities, apart from the obvious use in robotics. For instance, in prosthetics: “[As] we are using [a] solar cell as a touch sensor itself…we are also [making it less bulky] than other electronic skins.” This, he adds, will help create prosthetics of optimal weight and size, making things easier for prosthetics users. “If you look at electronic skin research, the real action starts after it makes contact… Solar skin is a step ahead, because it will start to work when the object is approaching…[and] have more time to prepare for action.” This could effectively reduce the time lag that is often seen in brain–computer interfaces.

There are also possibilities in the automation sector, particularly in electrical and interactive vehicles. A car covered with solar e-skin, because of its proximity-sensing capabilities, would be able to “see” an approaching obstacle or a person. It isn’t “seeing” in the biological sense, Dahiya clarifies, but from the point of view of a machine. This can be integrated with other objects, not just cars, for a variety of uses. “Gestures can be recognized as well…[which] could be used for gesture-based control…in gaming or in other sectors.”

In the lab, tests were conducted with a single source of white light at 650 lux, but Dahiya sees interesting possibilities if the e-skin could differentiate between multiple light sources. “We are exploring different AI techniques [for that],” he says, “processing the data in an innovative way [so] that we can identify the directions of the light sources as well as the object.”

The BEST team’s achievement brings us closer to a flexible, self-powered, cost-effective electronic skin that can touch as well as “see.” At the moment, however, there are still some challenges. One of them is flexibility. In their prototype, they used commercial solar cells made of amorphous silicon, each 1 cm × 1 cm. “They are not flexible, but they are integrated on a flexible substrate,” Dahiya says. “We are currently exploring nanowire-based solar cells…[with which] we hope to achieve good performance in terms of energy as well as sensing functionality.” Another shortcoming is what Dahiya calls “the integration challenge”—how to make the solar skin work with different materials.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ICCR 2020 – December 26-29, 2020 – [Online Conference]
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]

Let us know if you have suggestions for next week, and enjoy today's videos.

What a lovely Christmas video from Norlab.

[ Norlab ]

Thanks Francois!

MIT Mini-Cheetahs are looking for a new home. Our new cheetah cubs, born at NAVER LABS, are for the MIT Mini-Cheetah workshop. MIT professor Sangbae Kim and his research team are supporting joint research by distributing Mini-Cheetahs to researchers all around the world.

[ NAVER Labs ]

For several years, NVIDIA’s research teams have been working to leverage GPU technology to accelerate reinforcement learning (RL). As a result of this promising research, NVIDIA is pleased to announce a preview release of Isaac Gym – NVIDIA’s physics simulation environment for reinforcement learning research. RL-based training is now more accessible as tasks that once required thousands of CPU cores can now instead be trained using a single GPU.

[ NVIDIA ]

At SINTEF in Norway, they're working on ways of using robots to keep tabs on giant floating cages of tasty fish:

One of the tricky things about operating robots in an environment like this is localization, so SINTEF is working on a solution that uses beacons:

While that video shows a lot of simulation (because otherwise there are tons of fish in the way), we're told that the autonomous navigation has been successfully demonstrated with an ROV in "a full scale fish farm with up to 200,000 salmon swimming around the robot."

[ SINTEF ]

Thanks Eleni!

We’ve been getting ready for the snow in the most BG way possible. Wishing all of you a happy and healthy holiday season.

[ Berkshire Grey ]

ANYbotics doesn’t care what time of the year it is, so Happy Easter!

And here's a little bit about why ANYmal C looks the way it does.

[ ANYbotics ]

Robert "Buz" Chmielewski is using two modular prosthetic limbs developed by APL to feed himself dessert. Smart software puts his utensils in roughly the right spot, and then Buz uses his brain signals to cut the food with knife and fork. Once he is done cutting, the software then brings the food near his mouth, where he again uses brain signals to bring the food the last several inches to his mouth so that he can eat it.

[ JHUAPL ]

Introducing VESPER: a new military-grade small drone that is designed, sourced, and built in the United States. Vesper offers a 50-minute flight time, with speeds up to 45 mph (72 kph) and a total flight range of 25 miles (40 km). The magnetic snap-together architecture enables extremely fast transitions: the battery, props, and rotor set can each be swapped in <5 seconds.

[ Vantage Robotics ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale Faboratory ]

Get a preview of the cave environments that are being used to inspire the Final Event competition course of the DARPA Subterranean Challenge. In the Final Event, teams will deploy their robots to rapidly map, navigate, and search in competition courses that combine elements of man-made tunnel systems, urban underground, and natural cave networks!

The reason to pay attention to this particular video is that it gives us some idea of what DARPA means when they say "cave."

[ SubT ]

MQ-25 takes another step toward unmanned aerial refueling for the U.S. Navy. The MQ-25 test asset has flown for the first time with an aerial refueling pod containing the hose and basket that will make it an aerial refueler.

[ Boeing ]

We present a unified model-based and data-driven approach for quadrupedal planning and control to achieve dynamic locomotion over uneven terrain. We utilize on-board proprioceptive and exteroceptive feedback to map sensory information and desired base velocity commands into footstep plans using a reinforcement learning (RL) policy trained in simulation over a wide range of procedurally generated terrains.

[ DRS ]

The video shows the results of the German research project RoPHa. Within the project, the partners developed technologies for two application scenarios with the service robot Care-O-bot 4 in order to support people in need of help when eating.

[ RoPHa Project ]

Thanks Jenny!

This looks like it would be fun, if you are a crazy person.

[ Team BlackSheep ]

Robot accuracy is the limiting factor in many industrial applications. Manufacturers often specify only the pose-repeatability values of their robotic systems. Fraunhofer IPA has set up a testing environment for automated measurement of the accuracy performance criteria of industrial robots. Following the procedures defined in the ISO 9283 standard allows generating reliable and repeatable results, which can form the basis for targeted measures to increase a robotic system’s accuracy.

[ Fraunhofer ]

Thanks Jenny!

The IEEE Women in Engineering - Robotics and Automation Society (WIE-RAS) hosted an online panel on best practices for teaching robotics. The diverse panel boasts experts in robotics education from a variety of disciplines, institutions, and areas of expertise.

[ IEEE RAS ]

Northwestern researchers have developed a first-of-its-kind soft, aquatic robot that is powered by light and rotating magnetic fields. These life-like robotic materials could someday be used as "smart" microscopic systems for production of fuels and drugs, environmental cleanup or transformative medical procedures.

[ Northwestern ]

Tech United Eindhoven's soccer robots now have eight wheels instead of four wheels, making them twelve times better, if my math is right.

[ TU Eindhoven ]
