Feed aggregator

Human-centered artificial intelligence is increasingly deployed in professional workplaces in Industry 4.0 to address various challenges related to the collaboration between operators and machines, the augmentation of their capabilities, or the improvement of the quality of their work and life in general. Intelligent systems and autonomous machines need to continuously recognize and follow the professional actions and gestures of the operators in order to collaborate with them and anticipate their trajectories, avoiding potential collisions and accidents. Nevertheless, the recognition of patterns of professional gestures is a very challenging task for both research and industry. There are various types of human movements that the intelligent systems need to perceive, for example, gestural commands to machines and professional actions with or without the use of tools. Moreover, the interclass and intraclass spatiotemporal variances, together with very limited access to annotated human motion data, constitute a major research challenge. In this paper, we introduce the gesture operational model, which describes how gestures are performed based on assumptions that focus on the dynamic association of body entities, their synergies, and their serial and non-serial mediations, as well as their transitioning over time from one state to another. The assumptions of the gesture operational model are then translated into a simultaneous equation system for each body entity through state-space modeling. The coefficients of the equations are computed using the maximum likelihood estimation method. The simulation of the model generates a confidence-bounding box for every entity that describes the tolerance of its spatial variance over time. The contribution of our approach is demonstrated both for recognizing gestures and for forecasting human motion trajectories.
In recognition, it is combined with continuous hidden Markov models to boost the recognition accuracy when the likelihoods are not confident. In forecasting, a motion trajectory can be estimated by taking as few as two observations as input. The performance of the algorithm has been evaluated using four industrial datasets that contain gestures and actions from a TV assembly line, the glassblowing industry, gestural commands to automated guided vehicles, and human–robot collaboration in automotive assembly lines. The hybrid State-Space-and-HMM approach outperforms standard continuous HMMs and a 3DCNN-based end-to-end deep architecture.
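As a rough illustration of forecasting from only two observations (not the authors' implementation), a fitted second-order autoregressive state-space model can be rolled forward from two starting points, with a band that widens over the horizon in the spirit of the paper's confidence-bounding box. The coefficients and residual spread below are hypothetical stand-ins for values that maximum likelihood estimation would produce:

```python
def forecast(x0, x1, a1=1.6, a2=-0.64, sigma=0.05, steps=10):
    """Forecast a 1-D trajectory from two observations.

    a1, a2: hypothetical AR(2) coefficients (would come from MLE).
    sigma: hypothetical residual spread used to widen the band.
    Returns the trajectory plus lower/upper confidence bounds.
    """
    xs = [x0, x1]
    lo, hi = [x0, x1], [x0, x1]
    for t in range(steps):
        nxt = a1 * xs[-1] + a2 * xs[-2]
        xs.append(nxt)
        # Band grows with the horizon, mimicking the tolerance of
        # spatial variance described in the abstract.
        band = sigma * (t + 1) ** 0.5
        lo.append(nxt - band)
        hi.append(nxt + band)
    return xs, lo, hi

xs, lo, hi = forecast(0.0, 0.1)
```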

Research on robotic assistance devices tries to minimize the risk of falls due to misuse of non-actuated canes. This paper contributes to this research effort by presenting a novel control strategy for a robotic cane that adapts automatically to its user's gait characteristics. We verified the proposed control law on a robotic cane sharing the main shape features of a non-actuated cane. It consists of a motorized telescopic shaft mounted on top of two actuated wheels driven by the same motor. Cane control relies on two Inertial Measurement Units (IMUs): one attached to the cane and the other to the thigh of the user's impaired leg. During the swing phase of this leg, the motor of the wheels is controlled so that the cane orientation tracks the thigh angle of the impaired leg. The wheels are immobilized during the stance phase to provide motionless mechanical support to the user. The shaft length is continuously adjusted to keep the cane handle at a constant height. The primary goal of this work is to show the feasibility of synchronizing the cane's motion with its user's gait. Several experiments suggest the control strategy is promising. After further investigation and experiments with end users, the proposed control law could pave the road toward its use in robotic canes, either as permanent assistance or during rehabilitation.
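A minimal sketch of the gait-phase-dependent behavior described above, with a hypothetical proportional gain (the paper's actual control law may differ):

```python
def cane_wheel_command(phase, thigh_angle, cane_angle, k=2.0):
    """Return a wheel velocity command from the two IMU angles (rad).

    During the swing phase of the impaired leg, the wheels are driven
    so the cane orientation tracks the thigh angle; during stance they
    are held still for motionless support. The gain k is hypothetical.
    """
    if phase == "stance":
        return 0.0                          # wheels immobilized
    return k * (thigh_angle - cane_angle)   # proportional tracking
```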

Most of us have a fairly rational expectation that if we put our cellphone down somewhere, it will stay in that place until we pick it up again. Normally, this is exactly what you’d want, but there are exceptions, like when you put your phone down in not quite the right spot on a wireless charging pad without noticing, or when you’re lying on the couch and your phone is juuust out of reach no matter how much you stretch.

Roboticists from the Biorobotics Laboratory at Seoul National University in South Korea have solved both of these problems, and many more besides, by developing a cellphone case with little robotic legs, endowing your phone with the ability to skitter around autonomously. And unlike most of the phone-robot hybrids we’ve seen in the past, this one actually does look like a legit case for your phone.

CaseCrawler is much chunkier than a form-fitting case, but it’s not offensively bigger than one of those chunky battery cases. It’s only 24 millimeters thick (excluding the motor housing), and the total weight is just under 82 grams. Keep in mind that this case is in fact an entire robot, and also not at all optimized for being an actual phone case, so it’s easy to imagine how it could get a lot more svelte—for example, it currently includes a small battery that would be unnecessary if it instead tapped into the phone for power.

The technology inside is pretty amazing, since it involves legs that can retract all the way flat while also supporting a significant amount of weight. The legs work sort of like your legs do, in that there’s a knee joint that can only bend one way. To move the robot forward, a linkage (attached to a motor through a gearbox) pushes the leg back against the ground, as the knee joint keeps the leg straight. On the return stroke, the joint allows the leg to fold, making it compliant so that it doesn’t exert force on the ground. The transmission that sends power from the gearbox to the legs is just 1.5 millimeters thick, but this incredibly thin and lightweight mechanical structure is quite powerful. A non-phone case version of the robot, weighing about 23 g, is able to crawl at 21 centimeters per second while carrying a payload of just over 300 g. That’s more than 13 times its body weight.

The researchers plan on exploring how robots like these could make other objects movable that would otherwise not be. They’d also like to add some autonomy, which (at least for the phone case version) could be as straightforward as leveraging the existing sensors on the phone. And as to when you might be able to buy one of these—we’ll keep you updated, but the good news is that it seems to be fundamentally inexpensive enough that it may actually crawl out of the lab one day.

“CaseCrawler: A Lightweight and Low-Profile Crawling Phone Case Robot,” by Jongeun Lee, Gwang-Pil Jung, Sang-Min Baek, Soo-Hwan Chae, Sojung Yim, Woongbae Kim, and Kyu-Jin Cho from Seoul National University, appears in the October issue of IEEE Robotics and Automation Letters.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA

Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s coming together—literally! Japan’s giant Gundam appears nearly finished and ready for its first steps. In a recent video, Gundam Factory Yokohama, which is constructing the 18-meter-tall, 25-ton walking robot, provided an update on the project. The video shows the Gundam getting its head attached—after being blessed by Shinto priests. 

In the video update, they say the project is “steadily progressing” and further details will be announced around the end of September.

[ Gundam Factory Yokohama ]

Creating robots with emotional personalities will transform the usability of robots in the real world. Whereas previous emotive social robots have mostly been statically stable robots with limited mobility, this work develops an animation-to-real-world pipeline that enables dynamic bipedal robots, which can twist, wiggle, and walk, to behave with emotions.

So that’s where Cassie’s eyes go.

[ Berkeley ]

Now that the DARPA SubT Cave Circuit is all virtual, here’s a good reminder of how it’ll work.

[ SubT ]

Since July 20, anyone 11 years of age or older must wear a mask in enclosed public places in France. This measure is also highly recommended in many European, African, and Persian Gulf countries. To support businesses and public places, SoftBank Robotics Europe unveils a new feature for Pepper: AI Face Mask Detection.

[ SoftBank ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ University of Michigan ]

Suzumori Endo Lab at Tokyo Tech has created various types of IPMC robots, fabricated using novel 3D fabrication methods.

[ Suzumori Endo Lab ]

The most explode-y of drones manages not to explode this time.

[ SpaceX ]

At Amazon, we’re constantly innovating to support our employees, customers, and communities as effectively as possible. As our fulfillment and delivery teams have been hard at work supplying customers with items during the pandemic, Amazon’s robotics team has been working behind the scenes to re-engineer bots and processes to increase safety in our fulfillment centers.

While some folks are able to do their jobs at home with just a laptop and internet connection, it’s not that simple for other employees at Amazon, including those who spend their days building and testing robots. Some engineers have turned their homes into R&D labs to continue building these new technologies to better serve our customers and employees. Their creativity and resourcefulness to keep our important programs going is inspiring.

[ Amazon ]

Australian Army soldiers from 2nd/14th Light Horse Regiment (Queensland Mounted Infantry) demonstrated the PD-100 Black Hornet Nano unmanned aerial vehicle during a training exercise at Shoalwater Bay Training Area, Queensland, on 4 May 2018.

This robot has been around for a long time—maybe 10 years or more? It makes you wonder what the next generation will look like, and if they can manage to make it even smaller.

[ FLIR ]

Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high dynamic range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.

[ Paper ] via [ HKUST ]

Emys can help keep kindergarteners sitting still for a long time, which is no small feat!

[ Emys ]

Introducing the RoboMaster EP Core, an advanced educational robot that was built to take learning to the next level and provides an all-in-one solution for STEAM-based classrooms everywhere, offering AI and programming projects for students of all ages and experience levels.

[ DJI ]

Dutch food company Heemskerk uses ABB robots to automate its order picking. The new solution reduces the amount of time fresh produce spends in the supply chain, extending its shelf life, minimizing wastage, and creating a more sustainable solution for the fresh food industry.

[ ABB ]

This week’s episode of Pass the Torque features NASA’s Satellite Servicing Projects Division (NExIS) Robotics Engineer, Zakiya Tomlinson.

[ NASA ]

Massachusetts has been challenging Silicon Valley as the robotics capital of the United States. They’re not winning, yet. But they’re catching up.

[ MassTech ]

San Francisco-based Formant is letting anyone remotely take its Spot robot for a walk. Watch The Robot Report editors, based in Boston, take Spot for a walk around Golden Gate Park.

You can apply for this experience through Formant at the link below.

[ Formant ] via [ TRR ]

Thanks Steve!

An Institute for Advanced Study Seminar on “Theoretical Machine Learning,” featuring Peter Stone from UT Austin.

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Connections to theoretical advances in off-policy reinforcement learning will be highlighted throughout.

[ IAS ]

In recent years, there has been a rise in interest in the development of self-growing robotics inspired by the moving-by-growing paradigm of plants. In particular, climbing plants capitalize on their slender structures to successfully negotiate unstructured environments while employing a combination of two classes of growth-driven movements: tropic responses, growing toward or away from an external stimulus, and inherent nastic movements, such as periodic circumnutations, which promote exploration. In order to emulate these complex growth dynamics in a 3D environment, a general and rigorous mathematical framework is required. Here, we develop a general 3D model for rod-like organs adopting the Frenet-Serret frame, providing a useful framework from the standpoint of robotics control. Differential growth drives the dynamics of the organ, governed by both internal and external cues while neglecting elastic responses. We describe the numerical method required to implement this model and perform numerical simulations of a number of key scenarios, showcasing the applicability of our model. In the case of responses to external stimuli, we consider a distant stimulus (such as sunlight and gravity), a point stimulus (a point light source), and a line stimulus that emulates twining of a climbing plant around a support. We also simulate circumnutations, the response to an internal oscillatory cue, associated with search processes. Lastly, we also demonstrate the superposition of the response to an external stimulus and circumnutations. In addition, we consider a simple example illustrating the possible use of an optimal control approach in order to recover tropic dynamics in a way that may be relevant for robotics use. In all, the model presented here is general and robust, paving the way for a deeper understanding of plant response dynamics and also for novel control systems for newly developed self-growing robots.
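For reference, the Frenet-Serret frame the authors adopt evolves along the arclength $s$ of the rod according to the standard equations, with tangent $\mathbf{T}$, normal $\mathbf{N}$, binormal $\mathbf{B}$, curvature $\kappa$, and torsion $\tau$:

```latex
\frac{d\mathbf{T}}{ds} = \kappa \mathbf{N}, \qquad
\frac{d\mathbf{N}}{ds} = -\kappa \mathbf{T} + \tau \mathbf{B}, \qquad
\frac{d\mathbf{B}}{ds} = -\tau \mathbf{N}
```

In the growth model, differential growth then drives how $\kappa$ and $\tau$ change over time in response to the internal and external cues described above.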

Backed by the virtually unbounded resources of the cloud, battery-powered mobile robots can also benefit from cloud computing, meeting the demands of even the most computationally and resource-intensive tasks. However, many existing mobile-cloud hybrid (MCH) robotic tasks are inefficient in terms of optimizing trade-offs between simultaneously conflicting objectives, such as minimizing both battery power consumption and network usage. To tackle this problem, we propose a novel approach that can be used not only to instrument an MCH robotic task but also to search for its efficient configurations, representing compromise solutions between the objectives. We introduce a general-purpose MCH framework to measure, at runtime, how well the tasks meet these two objectives. The framework employs these efficient configurations to make decisions at runtime based on: (1) changes in the environment (i.e., WiFi signal level variation), and (2) changes within the system itself in a changing environment (i.e., actual observed packet loss in the network). We also introduce a novel search-based multi-objective optimization (MOO) algorithm, which works in two steps to search for efficient configurations of MCH applications. Analysis of our results shows that: (i) using self-adaptive and self-aware decisions, an MCH foraging task performed by a battery-powered robot can achieve better optimization in a changing environment than using static offloading or running the task only on the robot. However, a self-adaptive decision falls behind when the change happens within the system itself; in such a case, a self-aware system performs well in terms of minimizing the two objectives. (ii) The two-step algorithm can search for better-quality configurations for small- to medium-scale MCH robotic tasks, in terms of the total number of their offloadable modules.
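The two-objective trade-off can be pictured as Pareto filtering over candidate offloading configurations. This toy sketch (with made-up measurements, not the paper's Two-Step algorithm) keeps only configurations where neither battery power nor network usage can be improved without worsening the other:

```python
def pareto_front(configs):
    """Return the non-dominated (power, network) configurations.

    configs: dict name -> (battery_power, network_usage), lower is
    better for both. Toy stand-in for the paper's Two-Step search.
    """
    front = {}
    for name, (p, n) in configs.items():
        dominated = any(p2 <= p and n2 <= n and (p2 < p or n2 < n)
                        for p2, n2 in configs.values())
        if not dominated:
            front[name] = (p, n)
    return front

# Hypothetical measurements for four offloading configurations.
candidates = {
    "onboard-only": (9.0, 0.0),   # all compute on the robot
    "full-offload": (4.0, 8.0),   # everything in the cloud
    "hybrid":       (6.0, 3.0),   # split pipeline
    "bad-hybrid":   (7.0, 5.0),   # dominated by "hybrid"
}
```

At runtime, the framework would then pick among the surviving configurations as WiFi signal or packet loss changes.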

When designing a mobility system for a robot, the goal is usually to come up with one single system that allows your robot to do everything that you might conceivably need it to do, whether that’s walking, running, rolling, swimming, or some combination of those things. This is not at all how humans do it, though: If humans followed the robot model, we’d be walking around wearing some sort of horrific combination of sneakers, hiking boots, roller skates, skis, and flippers on our feet. Instead, we do the sensible thing, and optimize our mobility system for different situations by putting on different pairs of shoes. 

At ICRA, researchers from Georgia Tech demonstrated how this shoe swapping could be applied to robots. They haven’t just come up with a robot that can use “swappable propulsors”—as they call the robot’s shoes—but crucially, they’ve managed to get it to do the swapping all by itself with a cute little robot arm.

Nifty, right? The robot’s shoes, er, propulsors, fit snugly into T-shaped slots on the wheels, and stay secure through a combination of geometric orientation and permanent magnets. This results in a fairly simple attachment system with high holding force but low detachment force, as long as the manipulator jiggles the shoes in the right way. It’s all open loop for now, and it does take a while—in real time, swapping a single propulsor takes about 13 seconds.

Even though the propulsor-swapping capability requires the robot to carry the propulsors around, as well as a fairly high-DoF manipulator, the manipulator at least can be used for all kinds of other useful things. Many mobile robots have manipulators of one sort or another already, although they’re usually intended for world interaction rather than self-modification. With some adjustments to structure or degrees of freedom, mobile manipulators could potentially leverage swappable propulsors as well.

In case you’re wondering whether this additional complexity is worthwhile (that is, whether a robot with permanent wheel-legs could do everything this robot does without needing an arm or propulsor swapping), it turns out that swapping makes a substantial difference to efficiency. In its wheeled configuration on flat concrete, the robot had a cost of transport of 0.97, which the researchers say “represents a roughly three-fold decrease when compared to the legged results on concrete.” And of course the idea is that eventually, the robot will be able to handle a much wider variety of terrain, thanks to an on-board stockpile of different kinds of propulsors.
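Cost of transport is the dimensionless ratio of power to weight times speed, COT = P / (m g v). The article gives only the resulting 0.97 and the roughly three-fold legged penalty, so the power, mass, and speed below are illustrative values chosen to reproduce that figure:

```python
def cost_of_transport(power_w, mass_kg, speed_mps, g=9.81):
    """Dimensionless cost of transport: COT = P / (m * g * v)."""
    return power_w / (mass_kg * g * speed_mps)

# Hypothetical numbers that yield roughly the reported wheeled COT of
# 0.97; the legged configuration would then be about 3x higher.
wheeled = cost_of_transport(power_w=9.5, mass_kg=2.0, speed_mps=0.5)
legged_estimate = 3 * wheeled
```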

Photos: Georgia Tech The robot uses a manipulator mounted on its back to retrieve the propulsors from a compartment and attach them to its wheels. 

For more details, we connected with first author Raymond Kim via email.

IEEE Spectrum: Humans change shoes to do different things all the time—why do you think this hasn’t been applied to robots before?

Raymond Kim: In our view, there are two reasons for this. First, to date, most vehicle-mounted manipulators have been primarily designed to sense and interact with the external world rather than the robot. Therefore, vehicle-mounted manipulators may not be able to access all parts of the robot or sense interactions between the arm and the vehicle body. Second, locomotion involves relatively high forces between the propulsion system and the ground. Vehicle-mounted manipulators have historically been lightweight in order to minimize size, mass, and power consumption. As a result, such manipulators cannot impose large forces. Therefore, any swappable propulsor must be both capable of bearing large locomotive loads and also easily adapted with low manipulation forces. These two requirements are often at odds with each other, which creates a challenging design problem. Our ICRA presentation had a failure video that illustrated what happens when the design is not sufficiently robust.

How much autonomy is there in the system right now?

Currently, autonomy is limited to the trajectory tracking of the manipulator during the process of changing shoes/propulsors. We initiate the change of shoe based on human command and the shoe changing operation is a scripted trajectory. For a fully autonomous version, we would need a path-planning algorithm that is able to identify terrain in order to determine when to adapt.  This could be done with onboard sensing or a pre-loaded map. 

Is this concept primarily useful for modifying rotary motors, or could it have benefits for other kinds of mobility systems as well?

We envision that this concept can be applied to a broad range of locomotion systems. While we have focused on rotary actuators because of their common use, we imagine changing the end-effector on a linear actuator in a similar manner. Also, these methods could be used to modify passive components such as adding a tail to the back of a robot, a plow to the front, or redistributing the mass of the system.

Photo: Georgia Tech Currently the robot’s propulsors are designed for rough terrain, but the researchers are exploring different shapes that can help with mobility in snow, sand, and water.

What other propulsors do you think your robot might benefit from?

We are very excited to explore a broad range of propulsors. For terrestrial locomotion, we think more tailored adaptations for snow or sand would be valuable. These may involve modifying the wheels by adding spikes or paddles. Additionally, we were originally motivated by naval operations. Navy personnel can swim to shore using flippers and then switch to boots to operate on land. This switch can dramatically improve locomotive efficiency. Imagine trying to swim in boots, or climbing stairs with flippers! We are looking forward to similar designs that switch between fins and wheels/legs for amphibious behaviors.

What are you working on next?

Our immediate focus is on improving the performance of our existing ground vehicle. We are adding sensing capability to the arm so that swapping propulsors can be performed faster and with greater robustness. In addition, we are looking to tailor motion planning algorithms to the unique features of our vehicle. Finally, we are interested in examining other types of adaptations. This can involve swappable propulsors or other changes to the vehicle properties. Manipulation creates a great deal of flexibility, and we are broadly interested in how new types of vehicles can be designed to take advantage of manipulation-based adaptation.

“Using Manipulation to Enable Adaptive Ground Mobility,” by Raymond Kim, Alex Debate, Stephen Balakirsky, and Anirban Mazumdar from Georgia Tech, was presented at ICRA 2020.

[ Georgia Tech ]

Engagement is a concept of the utmost importance in human-computer interaction, not only for informing the design and implementation of interfaces, but also for enabling more sophisticated interfaces capable of adapting to users. While the notion of engagement is actively being studied in a diverse set of domains, the term has been used to refer to a number of related but different concepts. In fact, it has been referred to across different disciplines under different names and with different connotations in mind. Therefore, it can be quite difficult to understand what engagement means and how one study relates to another. Engagement has been studied not only in human-human but also in human-agent interactions, i.e., interactions with physical robots and embodied virtual agents. In this overview article we focus on different factors involved in engagement studies, distinguishing especially between studies that address task or social engagement, involve children or adults, are conducted in a lab, or are aimed at long-term interaction. We also present models for detecting engagement and for generating multimodal behaviors to show engagement.

Background: Clinical exoskeletal-assisted walking (EAW) programs for individuals with spinal cord injury (SCI) have been established, but many unknown variables remain. These include addressing staffing needs, determining the number of sessions needed to achieve a successful walking velocity milestone for ambulation, distinguishing potential achievement goals according to level of injury, and deciding the number of sessions participants need to perform in order to meet the Food and Drug Administration (FDA) criteria for personal use prescription in the home and community. The primary aim of this study was to determine the number of sessions necessary to achieve adequate EAW skills and velocity milestones, and the percentage of participants able to achieve these skills by 12 sessions and to determine the skill progression over the course of 36 sessions.

Methods: A randomized clinical trial (RCT) was conducted across three sites in persons with chronic (≥6 months) non-ambulatory SCI. Eligible participants were randomized (within site) to either the EAW arm first (Group 1), three times per week for 36 sessions, striving for completion in 12 weeks, or the usual activity (UA) arm first (Group 2), with both groups then crossing over to the other arm. The 10-meter walk test, in seconds (10MWT), the 6-min walk test, in meters (6MWT), and the Timed-Up-and-Go, in seconds (TUG), were performed at 12, 24, and 36 sessions. To test walking performance in the exoskeletal devices, nominal velocity and distance milestones were chosen prior to study initiation: 10MWT ≤40 s, 6MWT ≥80 m, and TUG ≤90 s. All walking tests were performed in the exoskeletons.
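The velocity milestones imply minimum average walking speeds that follow directly from the stated criteria:

```python
# Implied minimum average speeds from the stated EAW milestones.
ten_mwt_speed = 10 / 40        # 10 m in <= 40 s  -> at least 0.25 m/s
six_mwt_speed = 80 / (6 * 60)  # >= 80 m in 6 min -> at least ~0.22 m/s
```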

Results: A total of 50 participants completed 36 sessions of EAW training. At 12 sessions, 31 (62%), 35 (70%), and 36 (72%) participants achieved the 10MWT, 6MWT, and TUG milestones, respectively. By 36 sessions, 40 (80%), 41 (82%), and 42 (84%) achieved the 10MWT, 6MWT, and TUG criteria, respectively.

Conclusions: It is feasible to train chronic non-ambulatory individuals with SCI in performance of EAW sufficiently to achieve reasonable mobility skill outcome milestones.

To investigate how a robot's use of feedback can influence children's engagement and support second-language learning, we conducted an experiment in which 72 five-year-old children learned 18 English animal names from a humanoid robot tutor over three different sessions. During each session, children played 24 rounds of an “I spy with my little eye” game with the robot, and in each session the robot provided them with a different type of feedback. These feedback types were based on a questionnaire study that we conducted with student teachers, and the outcome of this questionnaire was translated into three within-subjects conditions: (teacher-)preferred feedback, (teacher-)dispreferred feedback, and no feedback. During the preferred feedback session, among other things, the robot varied its feedback and gave children the opportunity to try again (e.g., “Well done! You clicked on the horse.”, “Too bad, you pressed the bird. Try again. Please click on the horse.”); during dispreferred feedback the robot did not vary its feedback (“Well done!”, “Too bad.”) and children did not receive an extra attempt; and during no feedback the robot did not comment on the children's performance at all. We measured the children's engagement with the task and with the robot, as well as their learning gain, as a function of condition. Results show that children tended to be more engaged with the robot and the task when the robot used preferred feedback than in the two other conditions. However, preferred or dispreferred feedback did not have an influence on learning gain: children learned on average the same number of words in all conditions. These findings are especially interesting for long-term interactions, where children's engagement often drops. Moreover, feedback can become more important for learning when children need to rely on it more, for example, when words or language constructions are more complex than in our experiment.
The experiment's method, measurements and main hypotheses were preregistered.

The vast majority of drones are rotary-wing systems (like quadrotors), and for good reason: They’re cheap, they’re easy, they scale up and down well, and we’re getting quite good at controlling them, even in very challenging environments. For most applications, though, drones lose out to birds and their flapping wings in almost every way—flapping wings are very efficient, enable astonishing agility, and are much safer, able to make compliant contact with surfaces rather than shredding them like a rotor system does. But flapping wings have their challenges too: Making flapping-wing robots is so much more difficult than just duct-taping spinning motors to a frame that, with a few exceptions, we haven’t seen nearly as much improvement as we have in more conventional drones.

In Science Robotics last week, a group of roboticists from Singapore, Australia, China, and Taiwan described a new design for a flapping-wing robot that offers enough thrust and control authority to make stable transitions between aggressive flight modes—like flipping and diving—while also being able to efficiently glide and gently land. While still more complex than a quadrotor in both hardware and software, this ornithopter’s advantages might make it worthwhile.

One reason that making a flapping-wing robot is difficult is that the wings have to move back and forth at high speed while electric motors spin around and around at high speed. This requires a relatively complex transmission system, which (if you don’t do it carefully) leads to weight penalties and a significant loss of efficiency. One particular challenge is that the reciprocating mass of the wings tends to cause the entire robot to flex back and forth, which alternately binds and disengages elements in the transmission system.

The researchers’ new ornithopter design mitigates the flexing problem using hinges and bearings in pairs. Elastic elements also help improve efficiency, and the ornithopter is in fact more efficient with its flapping wings than it would be with a rotary propeller-based propulsion system. Its thrust exceeds its 26-gram mass by 40 percent, which is where much of the aerobatic capability comes from. And one of the most surprising findings of this paper was that flapping-wing robots can actually be more efficient than propeller-based aircraft.
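The stated 40 percent thrust margin pins down the thrust in gram-force terms directly:

```python
# Thrust implied by the reported 40% margin over the 26-gram mass.
mass_g = 26
thrust_gf = mass_g * 1.4           # about 36.4 gram-force
thrust_to_weight = thrust_gf / mass_g
```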

It’s not just thrust that’s a challenge for ornithopters: Control is much more complex as well. Like birds, ornithopters have tails, but unlike birds, they have to rely almost entirely on tail control authority, not having that bird-level of control over fine wing movements. To make an acrobatic level of control possible, the tail control surfaces on this ornithopter are huge—the tail plane area is 35 percent of the wing area. The wings can also provide some assistance in specific circumstances, as by combining tail control inputs with a deliberate stall of the wings to allow the ornithopter to execute rapid flips.

With the ability to take off, hover, glide, land softly, maneuver acrobatically, fly quietly, and interact with its environment in a way that’s not (immediately) catastrophic, flapping-wing drones easily offer enough advantages to keep them interesting. Now that ornithopters have been shown to be even more efficient than rotorcraft, the researchers plan to focus on autonomy with the goal of moving their robot toward real-world usefulness.

“Efficient flapping wing drone arrests high-speed flight using post-stall soaring,” by Yao-Wei Chin, Jia Ming Kok, Yong-Qiang Zhu, Woei-Leong Chan, Javaan S. Chahl, Boo Cheong Khoo, and Gih-Keong Lau from Nanyang Technological University in Singapore, National University of Singapore, Defence Science and Technology Group in Canberra, Australia, Qingdao University of Technology in Shandong, China, University of South Australia in Mawson Lakes, and National Chiao Tung University in Hsinchu, Taiwan, was published in Science Robotics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Virtual Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado

Let us know if you have suggestions for next week, and enjoy today’s videos.

Yesterday was a big day for what was quite possibly the most expensive robot on Earth up until it wasn’t on Earth anymore.

Perseverance and the Ingenuity helicopter are expected to arrive on Mars early next year.

[ JPL ]

ICYMI, our most popular post this week featured Northeastern University roboticist John Peter Whitney literally putting his neck on the line for science! He was testing a remotely operated straight razor shaving robotic system powered by fluidic actuators. The cutting-edge (sorry!) device transmits forces from a primary stage, operated by a barber, to a secondary stage, with the razor attached.

[ John Peter Whitney ]

Together with Boston Dynamics, Ford is introducing a pilot program into our Van Dyke Transmission Plant. Say hello to Fluffy the Robot Dog, who creates fast and accurate 3D scans that help Ford engineers when we’re retooling our plants.

Not shown in the video: “At times, Fluffy sits on its robotic haunches and rides on the back of a small, round Autonomous Mobile Robot, known informally as Scouter. Scouter glides smoothly up and down the aisles of the plant, allowing Fluffy to conserve battery power until it’s time to get to work. Scouter can autonomously navigate facilities while scanning and capturing 3-D point clouds to generate a CAD of the facility. If an area is too tight for Scouter, Fluffy comes to the rescue.”

[ Ford ]

There is a thing that happens at 0:28 in this video that I have questions about.

[ Ghost Robotics ]

Pepper is far more polite about touching than most humans.

[ Paper ]

We don’t usually post pure simulation videos unless they give us something to get really, really excited about. So here’s a pure simulation video.

[ Hybrid Robotics ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ DRSL ]

HMI is making beastly electric arms work underwater, even if they’re not stapled to a robotic submarine.

[ HMI ]

Here’s some interesting work in progress from MIT’s Biomimetics Robotics Lab. The limb is acting as a "virtual magnet" using a bimodal force and direction sensor.

Thanks Peter!

[ MIT Biomimetics Lab ]

This is adorable but as a former rabbit custodian I can assure you that approximately 3 seconds after this video ended, all of the wires on that robot were chewed to bits.

[ Lingkang Zhang ]

During the ARCHE 2020 integration week, TNO and the ETH Robotic Systems Lab (RSL) collaborated to integrate their research and development processes using the Articulated Locomotion and MAnipulation (ALMA) robot. Alongside the software integration, we tested the software to confirm proper implementation and development. We also captured visual and auditory data for future software development. This all resulted in the creation of multiple demos showing the capabilities of the teleoperation framework using the ALMA robot.

[ RSL ]

When we talk about practical applications of quadrupedal robots with foot wheels, we don’t usually think about them on this scale, although we should.

[ RSL ]

Juan wrote in to share a DIY quadruped that he’s been working on, named CHAMP.

Juan says that the demo robot can be built in less than US $1000 with easily accessible parts. “I hope that my project can provide a more accessible platform for students, researchers, and enthusiasts who are interested to learn more about quadrupedal robot development and its underlying technology.”


Thanks Juan!

Here’s a New Zealand TV report about a study on robot abuse from Christoph Bartneck at the University of Canterbury.

[ Paper ]

Our Robotics Studio is a hands-on class exposing students to practical aspects of the design, fabrication, and programming of physical robotic systems. So what happens when the class goes virtual due to the COVID-19 pandemic? Things get physical -- all @ home.

[ Columbia ]

A few videos from the Supernumerary Robotic Devices Workshop, held online earlier this month.

“Handheld Robots: Bridging the Gap between Fully External and Wearable Robots,” presented by Walterio Mayol-Cuevas, University of Bristol.

“Playing the Piano with 11 Fingers: The Neurobehavioural Constraints of Human Robot Augmentation,” presented by Aldo Faisal, Imperial College London.

[ Workshop ]

Robotic agents should be able to learn from sub-symbolic sensor data and, at the same time, be able to reason about objects and communicate with humans on a symbolic level. This raises the question of how to overcome the gap between symbolic and sub-symbolic artificial intelligence. We propose a semantic world modeling approach based on bottom-up object anchoring using an object-centered representation of the world. Perceptual anchoring processes continuous perceptual sensor data and maintains a correspondence to a symbolic representation. We extend the definitions of anchoring to handle multi-modal probability distributions and we couple the resulting symbol anchoring system to a probabilistic logic reasoner for performing inference. Furthermore, we use statistical relational learning to enable the anchoring framework to learn symbolic knowledge in the form of a set of probabilistic logic rules of the world from noisy and sub-symbolic sensor input. The resulting framework, which combines perceptual anchoring and statistical relational learning, is able to maintain a semantic world model of all the objects that have been perceived over time, while still exploiting the expressiveness of logical rules to reason about the state of objects which are not directly observed through sensory input data. To validate our approach we demonstrate, on the one hand, the ability of our system to perform probabilistic reasoning over multi-modal probability distributions, and on the other hand, the learning of probabilistic logical rules from anchored objects produced by perceptual observations. The learned logical rules are, subsequently, used to assess our proposed probabilistic anchoring procedure. We demonstrate our system in a setting involving object interactions where object occlusions arise and where probabilistic inference is needed to correctly anchor objects.

Engineers have been chasing a form of AI that could drastically lower the energy required to do typical AI things like recognize words and images. This analog form of machine learning does one of the key mathematical operations of neural networks using the physics of a circuit instead of digital logic. But one of the main things limiting this approach is that deep learning’s training algorithm, back propagation, has to be done by GPUs or other separate digital systems.

Now University of Montreal AI expert Yoshua Bengio, his student Benjamin Scellier, and colleagues at startup Rain Neuromorphics have come up with a way for analog AIs to train themselves. That method, called equilibrium propagation, could lead to continuously learning, low-power analog systems of a far greater computational ability than most in the industry now consider possible, according to Rain CTO Jack Kendall.

Analog circuits could save power in neural networks in part because they can efficiently perform a key calculation, called multiply and accumulate. That calculation multiplies values from inputs according to various weights, and then it sums all those values up. Two fundamental laws of electrical engineering can basically do that, too. Ohm’s Law multiplies voltage and conductance to give current, and Kirchhoff’s Current Law sums the currents entering a point. By storing a neural network’s weights in resistive memory devices, such as memristors, multiply-and-accumulate can happen completely in analog, potentially reducing power consumption by orders of magnitude.
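As a rough sketch (our own illustration, not code from the researchers), the multiply-and-accumulate that Ohm’s and Kirchhoff’s laws perform in hardware can be mimicked in a few lines: weights become conductances, inputs become voltages, and the output is the summed current at a node.

```python
# Simulating an analog multiply-and-accumulate (MAC) operation.
# Each weight is stored as a conductance G (siemens); each input is
# applied as a voltage V. Ohm's law gives each branch current I = G * V,
# and Kirchhoff's current law sums the currents at the output node --
# which is exactly a dot product.

def analog_mac(voltages, conductances):
    """Current at the summing node: sum of G_i * V_i over all branches."""
    assert len(voltages) == len(conductances)
    return sum(g * v for g, v in zip(voltages, conductances))

# Example: three inputs, three memristor-like conductances.
total_current = analog_mac([1.0, 0.5, -0.2], [0.01, 0.02, 0.03])
print(total_current)  # approximately 0.014 A
```

In a real crossbar array this sum happens physically and instantaneously on a wire, with no clocked arithmetic at all; the code above only makes the correspondence explicit.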

The reason analog AI systems can’t train themselves today has a lot to do with the variability of their components. Just like real neurons, those in analog neural networks don’t all behave exactly alike. To do back propagation with analog components, you must build two separate circuit pathways: one going forward to come up with an answer (a process called inferencing), and the other going backward to do the learning so that the answer becomes more accurate. But because of the variability of analog components, the pathways don’t match up.

“You end up accumulating error as you go backwards through the network,” says Bengio. To compensate, a network would need lots of power-hungry analog-to-digital and digital-to-analog circuits, defeating the point of going analog.

Equilibrium propagation allows learning and inferencing to happen on the same network, partly by adjusting the behavior of the network as a whole. “What [equilibrium propagation] allows us to do is to say how we should modify each of these devices so that the overall circuit performs the right thing,” he says. “We turn the physical computation that is happening in the analog devices directly to our advantage.”
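To make the idea concrete, here is a toy sketch (ours, not Bengio’s or Rain’s code) of equilibrium propagation on a one-weight network. The weight gradient is estimated by comparing the network’s “free” equilibrium with a slightly “nudged” one, so no separate backward circuit is needed.

```python
# Equilibrium propagation on a one-weight "network".
# Free phase: the state s relaxes to the minimum of the energy
#   E(s) = 0.5*s^2 - w*x*s.
# Nudged phase: the state relaxes to the minimum of E + beta*C, where
#   C = 0.5*(s - y)^2 is the cost and beta is a small nudging strength.
# The weight gradient is estimated from the two equilibria alone.

def free_equilibrium(w, x):
    # dE/ds = s - w*x = 0  =>  s = w*x
    return w * x

def nudged_equilibrium(w, x, y, beta):
    # d(E + beta*C)/ds = s - w*x + beta*(s - y) = 0
    return (w * x + beta * y) / (1.0 + beta)

def eqprop_gradient(w, x, y, beta):
    # (1/beta) * (dE/dw at nudged minus dE/dw at free), with dE/dw = -x*s
    s_free = free_equilibrium(w, x)
    s_nudged = nudged_equilibrium(w, x, y, beta)
    return x * (s_free - s_nudged) / beta

w, x, y = 0.5, 2.0, 1.5
estimate = eqprop_gradient(w, x, y, beta=1e-3)
exact = (w * x - y) * x  # analytic dC/dw at the free equilibrium
print(estimate, exact)   # the estimate approaches the exact gradient as beta -> 0
```

In an analog implementation, both relaxations would be carried out by the physics of the same circuit, which is exactly why component variability cancels out instead of accumulating.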

Right now, equilibrium propagation is only working in simulation. But Rain plans to have a hardware proof-of-principle in late 2021, according to CEO and cofounder Gordon Wilson. “We are really trying to fundamentally reimagine the hardware computational substrate for artificial intelligence, find the right clues from the brain, and use those to inform the design of this,” he says. The result could be what they call end-to-end analog AI systems capable of running sophisticated robots or even playing a role in data centers. Both of those are currently considered beyond the capabilities of analog AI, which is now focused only on adding inferencing abilities to sensors and other low-power “edge” devices, while leaving the learning to GPUs.

In this study, the sources of EEG activity in motor imagery brain–computer interface (BCI) control experiments were investigated. Sixteen linear decomposition methods for EEG source separation were compared according to different criteria. The criteria were mutual information reduction between the source activities and physiological plausibility. The latter was tested by estimating the dipolarity of the source topographic maps, i.e., the accuracy of approximating the map by potential distribution from a single current dipole, as well as by the specificity of the source activity for different motor imagery tasks. The decomposition methods were also compared according to the number of shared components found. The results indicate that most of the dipolar components are found by the Independent Component Analysis Methods AMICA and PWCICA, which also provided the highest information reduction. These two methods also found the most task-specific EEG patterns of the blind source separation algorithms used. They are outperformed only by non-blind Common Spatial Pattern methods in terms of pattern specificity. The components found by all of the methods were clustered using the Attractor Neural Network with Increasing Activity. The results of the cluster analysis revealed the most frequent patterns of electrical activity occurring in the experiments. The patterns reflect blinking, eye movements, sensorimotor rhythm suppression during the motor imagery, and activations in the precuneus, supplementary motor area, and premotor areas of both hemispheres. Overall, multi-method decomposition with subsequent clustering and task-specificity estimation is a viable and informative procedure for processing the recordings of electrophysiological experiments.

iRobot has been on a major push into education robots recently. They acquired Root Robotics in 2019, and earlier this year, launched an online simulator and associated curriculum designed to work in tandem with physical Root robots. The original Root was intended to be a classroom robot, with one of its key features being the ability to stick to (and operate on) magnetic virtual surfaces, like whiteboards. And as a classroom robot, at $200, it’s relatively affordable, if you can buy one or two and have groups of kids share them.

For kids who are more focused on learning at home, though, $200 is a lot for a robot that doesn't even keep your floors clean. And as nice as it is to have a free simulator, any kid will tell you that it’s way cooler to have a real robot to mess around with. Today, iRobot is announcing a new version of Root that’s been redesigned for home use, with a $129 price that makes it significantly more accessible to folks outside of the classroom.

The Root rt0 is a second version of the Root robot—the more expensive, education-grade Root rt1 is still available. To bring the cost down, the rt0 is missing some features that you can still find in the rt1. Specifically, you don’t get the internal magnets to stick the robot to vertical surfaces, there are no cliff sensors, and you don’t get a color scanner or an eraser. But for home use, the internal magnets are probably not necessary anyway, and the rest of that stuff seems like a fair compromise for a cost reduction of about 35 percent.

Photo: iRobot One of the new accessories for the iRobot Root rt0 is a “Brick Top” that snaps onto the upper face of the robot via magnets. The accessory can be used with LEGO bricks and other LEGO-compatible pieces, opening up an enormous amount of customization.

It’s not all just taking away, though. There’s also a new $20 accessory, a LEGO-ish “Brick Top” that snaps onto the upper face of Root (either version) via magnets. The plate can be used with LEGO bricks and other LEGO-compatible things. This opens up an enormous amount of customization, and it’s for more than just decoration, since Root rt0 has the ability to interact with whatever’s on top of it via its actuated marker. Root can move the marker up and down, the idea being that you can programmatically turn lines on and off. By replacing the marker with a plastic thingy that sticks up through the body of the robot, the marker up/down command can be used to actuate something on the brick top. In the video, that’s what triggers the catapult.

Photo: iRobot By attaching a marker, you can program Root to draw. The robot has a motor that can move the marker up and down. 

This less expensive version of Root still has access to the online simulator, as well as the multi-level coding interface that allows kids to seamlessly transition through multiple levels of coding complexity, from graphical to text. There’s a new Android app coming out today, and you can access everything through web-based apps on Chrome OS, Windows and macOS, as well as on iOS. iRobot tells us that they’ve also recently expanded their online learning library full of Root-based educational activities. In particular, they’ve added a new category on “Social Emotional Learning,” the goal of which is to help kids develop things like social awareness, self-management, decision making, and relationship skills. We’re not quite sure how you teach those things with a little hexagonal robot, but we like that iRobot is giving it a try.

The Root coding robot is designed for kids ages 6 and up, ships for free, and is available now.

[ iRobot Root ]

Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.

This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid. 

Photo: John Peter Whitney/Northeastern University Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].

Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.

Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.

Perhaps that was a poor choice of words. But, rest assured that Whitney lived long enough to answer our questions after. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.

If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”

And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”

He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”

Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”
A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.

We analyze the efficacy of modern neuro-evolutionary strategies for continuous control optimization. Overall, the results collected on a wide variety of qualitatively different benchmark problems indicate that these methods are generally effective and scale well with respect to the number of parameters and the complexity of the problem. Moreover, they are relatively robust with respect to the setting of hyper-parameters. The comparison of the most promising methods indicates that the OpenAI-ES algorithm outperforms or equals the other algorithms on all considered problems. Moreover, we demonstrate how the reward functions optimized for reinforcement learning methods are not necessarily effective for evolutionary strategies and vice versa. This finding can lead to reconsideration of the relative efficacy of the two classes of algorithm since it implies that the comparisons performed to date are biased toward one or the other class.

In the world of academics, peer review is considered the only credible validation of scholarly work. Although the process has its detractors, evaluation of academic research by a cohort of contemporaries has endured for over 350 years, with “relatively minor changes.” However, peer review may be set to undergo its biggest revolution ever—the integration of artificial intelligence.

Open-access publisher Frontiers has debuted an AI tool called the Artificial Intelligence Review Assistant (AIRA), which purports to eliminate much of the grunt work associated with peer review. Since the beginning of June 2020, every one of the 11,000-plus submissions Frontiers received has been run through AIRA, which is integrated into its collaborative peer-review platform. This also makes it accessible to external users, accounting for some 100,000 editors, authors, and reviewers. Altogether, this helps “maximize the efficiency of the publishing process and make peer-review more objective,” says Kamila Markram, founder and CEO of Frontiers.

AIRA’s interactive online platform, which is a first of its kind in the industry, has been in development for three years. It performs three broad functions, explains Daniel Petrariu, director of project management: assessing the quality of the manuscript, assessing quality of peer review, and recommending editors and reviewers. At the initial validation stage, the AI can make up to 20 recommendations and flag potential issues, including language quality, plagiarism, integrity of images, conflicts of interest, and so on. “This happens almost instantly and with [high] accuracy, far beyond the rate at which a human could be expected to complete a similar task,” Markram says.

“We have used a wide variety of machine-learning models for a diverse set of applications, including computer vision, natural language processing, and recommender systems,” says Markram. This includes simple bag-of-words models, as well as more sophisticated deep-learning ones. AIRA also leverages a large knowledge base of publications and authors.

Markram notes that, to address issues of possible AI bias, “We…[build] our own datasets and [design] our own algorithms. We make sure no statistical biases appear in the sampling of training and testing data. For example, when building a model to assess language quality, scientific fields are equally represented so the model isn’t biased toward any specific topic.” Feedback from domain experts, including flagged errors, is captured and used as additional training data for the machine- and deep-learning models. “By regularly re-training, we make sure our models improve in terms of accuracy and stay up-to-date.”

The AI’s job is to flag concerns; humans take the final decisions, says Petrariu. As an example, he cites image manipulation detection—something AI is super-efficient at but is nearly impossible for a human to perform with the same accuracy. “About 10 percent of our flagged images have some sort of problem,” he adds. “[In academic publishing] nobody has done this kind of comprehensive check [using AI] before,” says Petrariu. AIRA, he adds, facilitates Frontiers’ mission to make science open and knowledge accessible to all.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado

Let us know if you have suggestions for next week, and enjoy today’s videos.

Here are some professional circus artists messing around with an industrial robot for fun, like you do.

The acrobats are part of Östgötateatern, a Swedish theatre group, and the chair bit got turned into its own act, called “The Last Fish.” But apparently the Swedish Work Environment Authority didn’t like that an industrial robot—a large ABB robotic arm—was being used in an artistic performance, arguing that the same safety measures that apply in a factory setting would apply on stage. In other words, the robot had to operate inside a protective cage and humans could not physically interact with it.

When told that their robot had to be removed, the acrobats went to court. And won! At least that’s what we understand from this Swedish press release. The court in Linköping, in southern Sweden, ruled that the safety measures taken by the theater had been sufficient. The group had worked with a local robotics firm, Dyno Robotics, to program the manipulator and learn how to interact with it as safely as possible. The robot—which the acrobats say is the eighth member of their troupe—will now be allowed to return.

[ Östgötateatern ]

Houston Mechatronics’ Aquanaut continues to be awesome, even in the middle of a pandemic. It’s taken the big step (big swim?) out of NASA’s swimming pool and into open water.

[ HMI ]

Researchers from Carnegie Mellon University and Facebook AI Research have created a navigation system for robots powered by common sense. The technique uses machine learning to teach robots how to recognize objects and understand where they’re likely to be found in a house. The result allows the machines to search more strategically.

[ CMU ]

Cassie manages 2.1 m/s, which is uncomfortably fast in a couple of different ways.

Next, untethered. After that, running!

[ Michigan Robotics ]

Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces, so they do not run into one another.

Multi-robot motion coordination is a fundamental robotics problem with wide-ranging applications, from urban search and rescue to the control of fleets of self-driving cars to formation-flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, the presence of larger numbers of robots in an environment makes their interactions increasingly complex (and more prone to collisions).

To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called "Global-to-Local Safe Autonomy Synthesis," or GLAS, which imitates a complete-information planner with only local information, and "Neural-Swarm," a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.

[ Caltech ]

Fetch Robotics’ Freight robot is now hauling around pulsed xenon UV lamps to autonomously disinfect spaces with UV-A, UV-B, and UV-C, all at the same time.

[ SmartGuard UV ]

When you’re a vertically symmetrical quadruped robot, there is no upside-down.

[ Ghost Robotics ]

In the virtual world, the objects you pick up do not exist: you can see that cup or pen, but it does not feel like you’re touching them. That presented a challenge to EPFL professor Herbert Shea. Drawing on his extensive experience with silicone-based muscles and motors, Shea wanted to find a way to make virtual objects feel real. “With my team, we’ve created very small, thin and fast actuators,” explains Shea. “They are millimeter-sized capsules that use electrostatic energy to inflate and deflate.” The capsules have an outer insulating membrane made of silicone enclosing an inner pocket filled with oil. Each bubble is surrounded by four electrodes that can close like a zipper. When a voltage is applied, the electrodes are pulled together, causing the center of the capsule to swell like a blister. It is an ingenious system because the capsules, known as HAXELs, can move not only up and down, but also side to side and around in a circle. “When they are placed under your fingers, it feels as though you are touching a range of different objects,” says Shea.

[ EPFL ]

Through the simple trick of reversing motors on impact, a quadrotor can land much more reliably on slopes.

[ Sherbrooke ]

Turtlebot delivers candy at Harvard.

I <3 Turtlebot SO MUCH

[ Harvard ]

Traditional drone controllers are a little bit counterintuitive: one stick controls forward and backward motion while another controls up and down, yet both sticks move along the same physical axis. How does that make sense?! Here’s a remote that gives you actual z-axis control instead.

[ Fenics ]

Thanks Ashley!

Lio is a mobile robot platform with a multifunctional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.

[ F&P Robotics ]

Video shows a ground vehicle autonomously exploring and mapping a multi-storey garage building and a connected patio on the Carnegie Mellon University campus. The vehicle runs onboard state estimation and mapping leveraging range, vision, and inertial sensing, local planning for collision avoidance, and terrain analysis. All processing is real-time, with no post-processing involved. The vehicle drives at 2 m/s throughout the exploration run. This work is dedicated to the DARPA Subterranean Challenge.

[ CMU ]

Raytheon UK’s flagship STEM programme, the Quadcopter Challenge, gives 14- to 15-year-olds the chance to participate in a hands-on, STEM-based engineering challenge to build a fully operational quadcopter. Each team is provided with an identical kit of parts, tools and instructions to build and customise their quadcopter, whilst Raytheon UK STEM Ambassadors provide mentoring, technical support and deliver bite-size learning modules to support the build.

[ Raytheon ]

A video on some of the research work that is being carried out at The Australian Centre for Field Robotics, University of Sydney.

[ University of Sydney ]

Jeannette Bohg, assistant professor of computer science at Stanford University, gave one of the Early Career Award Keynotes at RSS 2020.

[ RSS 2020 ]

Adam Savage remembers Grant Imahara.

[ Tested ]