
In the late 1980s, Rod Brooks and Anita Flynn published a paper in The Journal of the British Interplanetary Society with the amazing title of Fast, Cheap, and Out of Control: A Robotic Invasion of the Solar System. The paper explored the idea that instead of sending one big and complicated and extremely expensive robot to explore (say) the surface of Mars, you could instead send a whole bunch of little and simple and extremely cheap robots, while still accomplishing mission goals. The abstract of the paper concludes: “We suggest that within a few years it will be possible at modest cost to invade a planet with millions of tiny robots.”

That was 1989, and we’re still nowhere near millions of tiny robots. Some things are just really hard to scale down, and building robots the size of bees or flies or even gnats requires advances in (among other things) sensing for autonomy as well as appropriate power systems. But progress is being made, and Sawyer Fuller, assistant professor at the University of Washington (who knows a thing or four about insect-scale flying robots), has a new article in Science Robotics that shows how it’s possible to put together the necessary sensing hardware to enable stable, autonomous flight for flying robots smaller than a grain of rice.

For a tiny flying robot to be autonomous (or for any flying robot to be autonomous, really), it needs to be able to maintain its own stability, using sensors to keep track of where it is and make sure that it doesn’t go anywhere it doesn’t want to go. This is especially tricky for small-scale flying robots, because they can be pushed around by air currents or turbulence that larger robots can simply ignore. But it turns out that being tiny also has some advantages: Because the drag of the air itself becomes more dominant the smaller an aircraft gets, an onboard gyroscope becomes unnecessary, and you just need an accelerometer. Tie that to an optic flow camera to track motion, along with a microcontroller to do the computation, and you have everything you need.
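To make that concrete, here is a minimal single-axis sketch, in Python, of how a drag-dominated flyer could recover its tilt angle from an accelerometer alone. This is not the estimator from Fuller’s article, and every constant is invented for illustration; in a real system, the optic flow camera would additionally anchor the velocity estimate to the ground.

```python
import numpy as np

# Illustrative single-axis sketch (not Fuller's estimator; all constants
# are invented). At gnat scale, drag dominates, so the accelerometer
# reading is roughly proportional to lateral airspeed, a ≈ -(c/m)·v, and
# the lateral dynamics under a small tilt angle theta are
#   v̇ = g·theta − (c/m)·v.
# Inverting those two relations recovers the tilt with no gyroscope.

g = 9.81         # gravity, m/s^2
c_over_m = 20.0  # assumed drag-to-mass ratio, 1/s (large at insect scale)
dt = 0.002       # 500 Hz sensor updates
alpha = 0.05     # low-pass gain for the noisy velocity estimate

rng = np.random.default_rng(0)
theta_true, v_true = 0.0, 0.0
v_hat = 0.0

for k in range(5000):
    # Ground truth: a brief wind gust tilts the robot by 0.1 radian.
    theta_true = 0.1 if 1000 <= k < 1500 else 0.0
    v_true += (g * theta_true - c_over_m * v_true) * dt

    # Accelerometer measures the drag-induced specific force, plus noise.
    a_meas = -c_over_m * v_true + rng.normal(0.0, 0.05)

    # Velocity falls straight out of the drag model; low-pass the noise.
    v_hat_prev = v_hat
    v_hat += alpha * (-a_meas / c_over_m - v_hat)

    # Tilt follows by inverting the lateral dynamics.
    v_dot_hat = (v_hat - v_hat_prev) / dt
    theta_hat = (v_dot_hat + c_over_m * v_hat) / g

print(f"final tilt estimate: {theta_hat:.4f} rad (true: {theta_true})")
```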

Sawyer B. Fuller

The camera in the picture above is, somewhat incredibly, available off-the-shelf. It’s designed primarily to explore your insides, which is why the entire camera is only 0.65 millimeters tall and wide, 1.2 mm long, and weighs 1 milligram (including its multi-element lens). The sensor on this particular camera exceeds the power budget that the researchers are targeting, probably because its intended use case does not involve tiny robots with tinier batteries, but there are existing sensors of a similar size that would work.

In total, this hardware weighs 6.2 mg and uses 167 microwatts of power, which in theory could be suitable for a 10 mg flying robot, something about the size of a chonky gnat. Figuring out whether it all actually works in practice isn’t easy, since chonky robotic gnats don’t exist, so the researchers instead used a palm-sized drone running simulated sensors. Testing showed that the system was able to successfully estimate the attitude of the drone and also detect and reject disturbances from wind. In fact, its performance was comparable to an actual fruit fly, which is impressive considering how long the fruit fly has had to refine its design.

“Reducing drone size down to gnat scale only amplifies many of the benefits of insect scale,” Fuller says, “such as greater potential to harvest all needed energy from the environment and larger deployments.” Much like Brooks and Flynn’s vision for swarms of inexpensive robots, Fuller sees the kind of gnat-sized robots that these sensors will help enable as a completely new approach to autonomous exploration. “Small flying robotic insects will revolutionize low-altitude atmospheric ‘air telemetry’—remote sensing of air composition and flow—by doing so on a much more detailed and persistent basis than is possible now. They will power themselves from the sun or indoor lighting—which favors small scale. The chemical sensor might be an insect antenna, which my group demonstrated in the ‘smellicopter.’ Applications include early detection of forest fires, pest onset in agriculture, buried explosives, or mapping hazardous volatiles to find leaks of greenhouse gasses or the spread of airborne diseases.”

And if you find the whole “Fast, Cheap, and Out of Control” thing compelling and want to watch a very strange movie of the same name from 1997 featuring Rod Brooks, a lion tamer, a topiary artist, and a naked mole rat expert, here you go.



Eight years and 14 million views ago, ETH Zurich introduced the Cubli, a robotic cube that can dynamically balance on a single point. It’s magical to watch, but at the same time fairly straightforward to understand: there are three reaction wheels within the Cubli, one for each axis. And in a vivid demonstration of Newton’s third law, spinning up a reaction wheel exerts a torque on the cube in the opposite direction, resulting in precise control over roll, pitch, and yaw that allows the Cubli to balance itself, move around, and even jump.
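A quick back-of-the-envelope version of that principle: with no external torque, the total angular momentum of body plus wheel is conserved, so spinning the wheel one way rotates the body the other way. The numbers below are invented, not the Cubli’s.

```python
# Conservation of angular momentum about one axis (illustrative numbers):
#   I_body * omega_body + I_wheel * omega_wheel = 0   (starting from rest)
I_body = 1.2e-2    # body moment of inertia, kg·m^2 (assumed)
I_wheel = 6.0e-4   # reaction-wheel moment of inertia, kg·m^2 (assumed)

omega_wheel = 200.0  # wheel speed commanded by the motor, rad/s
omega_body = -I_wheel * omega_wheel / I_body
print(f"body counter-rotates at {omega_body:.1f} rad/s")  # -10.0 rad/s
```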

This is very cool, but obviously, controlling the Cubli in three axes requires three reaction wheels. If you took out a reaction wheel, one of the Cubli’s axes would just do whatever it wanted, and if you took out two reaction wheels, then surely it would topple over, right?

Right…?

Figuring that an appropriate number of actuated degrees of freedom for a self-balancing cube was somehow too easy, researchers from ETH Zurich (Matthias Hofer, Michael Muehlebach, and Raffaello D’Andrea) decided to build a One-Wheel Cubli, which manages to balance on a point just like the original Cubli, except with a single reaction wheel. Whoa.


The One-Wheel Cubli (OWC) uses its single reaction wheel to control itself in both pitch and roll. The yaw degree of freedom is uncontrolled, meaning that the OWC can spin around on its pivot point, although thanks to friction, it doesn’t. Having more degrees of freedom than actuators (in this case, reaction wheels) means that the OWC is what’s called underactuated. But obviously, full control over two very separate axes is required to pull off this balancing act. So how does it work?

Designer Matthias Hofer explains that you can think of the One-Wheel Cubli’s balancing act as trying to balance both a pen and a broomstick vertically on your palm, if you also imagine that you only have to worry about balancing them along one axis: they’ll only tip toward you or away from you, and you can move your palm underneath them to compensate. The pen, being shorter, is harder to balance and requires small, rapid movements of your palm. The longer broomstick, meanwhile, is much easier to balance, and you can do so with slower movements. This is essentially the working principle of the OWC: you may have only one control input to work with, but the small, fast movements and the large, slow movements are decoupled enough that one actuator can manage them both independently, by making the small, fast movements within the large, slow ones. And this, incidentally, is the reason for the long beam with the weights on the ends that differentiates the One-Wheel Cubli from the original Cubli: it’s there to maximize the difference in inertia between the two axes you’re trying to independently control.
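One way to see why that inertia difference is the whole trick is to check controllability of two linearized inverted pendulums sharing a single input. The sketch below is the pen-and-broomstick analogy, not the OWC’s actual model, and the numbers are made up: with different lengths, the one shared input can stabilize both pendulums; with identical lengths, controllability is lost.

```python
import numpy as np

# Two inverted pendulums balanced on the same palm, linearized about
# upright, sharing palm acceleration u as the only input:
#   theta_i'' = (g / l_i) * theta_i - u / l_i
g = 9.81
l_pen, l_broom = 0.15, 1.2  # two different length scales, meters (assumed)

def controllable(l1, l2):
    # State x = [theta_1, theta_1', theta_2, theta_2']
    A = np.array([[0, 1, 0, 0],
                  [g / l1, 0, 0, 0],
                  [0, 0, 0, 1],
                  [0, 0, g / l2, 0]], dtype=float)
    B = np.array([[0.0], [-1.0 / l1], [0.0], [-1.0 / l2]])
    # Kalman controllability matrix [B, AB, A^2 B, A^3 B] needs full rank.
    ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(4)])
    return np.linalg.matrix_rank(ctrb) == 4

print(controllable(l_pen, l_broom))  # True: distinct time scales decouple
print(controllable(l_pen, l_pen))    # False: identical axes are indistinguishable
```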

“Seeing the OWC balance for the first time was counter-intuitive as the working principle is not obvious,” Hofer told IEEE Spectrum. “It was very satisfying for us, as it meant that every puzzle piece of the project that Michael Muehlebach, Raffaello D’Andrea, and I, along with our technical staff (Michael Egli and Matthias Müller), contributed to finally worked—including the theoretical analysis, the prototype development, the modeling, the state estimation, and the control design.”

All those puzzle pieces took a long time to fit together: years of work were needed to get from something that would theoretically work on paper to an actual working system. After a couple of failed early hardware iterations, the researchers put extra effort into a much more detailed modeling approach, which they then leveraged into the control system that was ultimately successful. One of the most important tricks, it turned out, was to carefully model exactly how the beam with the weights on the ends deflects. The deflection isn’t much, but it’s enough to screw everything up if you’re not careful. And as you can see in the video, the control system is successful enough that despite the underactuated nature of the OWC, it can even compensate for some gentle nudging.

The One-Wheel Cubli is more than just an abstract hardware and software project; there are potentially useful applications here, one of which is attitude control of satellites. Many satellites already use reaction wheels to keep them pointed in the right direction, and these reaction wheels are so critical to a satellite’s functionality that spares are typically included, which adds mass and complexity. For satellites with long structures (like instrument booms) that provide different mass moments of inertia along different axes, the OWC’s control technique could provide an additional layer of redundancy in case of multiple reaction-wheel failures.

We asked Hofer about what he might like to work on next, and it sounds like taming that uncontrolled yaw axis is a potential way to go. “An interesting extension would be to also control the yaw degree of freedom,” Hofer says. “If the reaction wheel is not mounted orthogonally to the yaw direction, it would affect both tilt angles plus the yaw direction. If all three degrees of freedom have different mass moments of inertia, the current working principle of the OWC could possibly be extended to all three degrees of freedom.”



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND
ICRA 2023: 29 May–2 June 2023, LONDON
RoboCup 2023: 4–10 July 2023, BORDEAUX, FRANCE
RSS 2023: 10–14 July 2023, DAEGU, KOREA
IEEE RO-MAN 2023: 28–31 August 2023, BUSAN, KOREA

Enjoy today’s videos!

Liquid metal and hydrogel combine to make a soft, inflatable actuator that runs entirely on electricity without relying on external pumps.

[ Paper ] via [ NC State ]

Happy 10th anniversary to Jamie Paik’s Reconfigurable Robotics Lab at EPFL!

[ RRL ]

The manufacturing industry (largely) welcomed artificial intelligence with open arms. Less of the dull, dirty, and dangerous? Say no more. Planning for mechanical assemblies still requires more than scratching out some sketches, of course: it’s a complex conundrum that means dealing with arbitrary 3D shapes and the highly constrained motions required for real-world assemblies.
In a quest to ease some of said burdens, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Autodesk Research, and Texas A&M University came up with a method for automatically assembling products that’s accurate, efficient, and generalizable to a wide range of complex real-world assemblies. Their algorithm efficiently determines the order for multi-part assembly, and then searches for a physically realistic motion path for each step.
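A common way to structure this kind of planner, and roughly the shape of the two-stage pipeline described above, is to plan disassembly instead: repeatedly find a part that can be extracted from the current assembly along a collision-free path, then play the steps back in reverse. Here is a rough Python sketch, where `find_removal_path` is a hypothetical stand-in for the expensive physics-based motion query:

```python
def find_removal_path(part, others):
    """Hypothetical stand-in: return a collision-free path that extracts
    `part` from the subassembly `others`, or None if it is blocked."""
    ...

def plan_disassembly(parts):
    if len(parts) <= 1:
        return []  # the last remaining part is trivially free
    for part in parts:
        rest = [p for p in parts if p is not part]
        path = find_removal_path(part, rest)
        if path is None:
            continue  # this part is blocked; try another
        tail = plan_disassembly(rest)
        if tail is not None:
            return [(part, path)] + tail
    return None  # dead end: nothing can be removed

def plan_assembly(parts):
    steps = plan_disassembly(list(parts))
    if steps is None:
        return None
    # Assembly is disassembly in reverse (each extraction path would also
    # be played backward to become an insertion path).
    return list(reversed(steps))
```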

[ MIT CSAIL ]

Thanks, Rachel!

Xenoforms is an installation work that consists of 3D prints of parametric models, video animations and visualizations, posters with technical diagrams, and a 6-axis 3D printer. In this work, a series of three-dimensional forms has been automatically generated by an artificial system that attempts to identify design decisions for an efficient, sustainable, and durable structure. The work provides a speculative scenario that demonstrates how an autonomous A.I. system follows its own strategies for colonizing architectural space and, by extension, becoming a human symbiont.

Xenoforms is a collaboration between Flexiv (and its Rizon arm) and artist Stavros Didakis.

[ Sonicon Lab ] via [ Flexiv ]

Thanks, Noah!

The latest buzz at the University of Maryland? Tiny, autonomous drones that harness the power of artificial intelligence to work together. In this case, the minute robots could one day provide backup to pollinators like honey bees, potentially securing the world’s food crops as these critical insect species face challenges from fungal disease, pesticides, and climate change. The project is led by doctoral student Chahat Deep Singh M.E. ’18 of the Perception and Robotics Group, which is directed by Professor Yiannis Aloimonos and Research Scientist Cornelia Fermüller.

[ UMD ]

iRobot has a museum, which is a lot more interesting than you might think, because iRobot spent a very long time making things that are really, really not vacuums. And make sure to look closely at some of the earliest robots, because they in fact predate iRobot itself.

Some of those robots still have “IS Robotics” branding on them, which was the name of the company that Rod Brooks (and his students Colin Angle and Helen Greiner) founded in 1990. It wasn’t called “iRobot” until 2000. IT, in particular, was still part of Brooks’ lab at MIT in the mid-1990s, and was featured on a 1996 episode of “Scientific American Frontiers” which I just found on YouTube. There’s also a clip of Marc Raibert from 1987!

And just a little more of the best stuff from the museum:

[ iRobot ]

The ANYexo 2.0 is our latest prototype based on around two decades of research at the Sensory-Motor Systems Lab and Robotic Systems Lab of ETH Zürich. This video shows uncommented impressions of the main features of ANYexo 2.0 and its performance in range of motion, speed, strength, haptic transparency, and human-robot attachment system.

[ ETH Zurich ]

Here are four of the finalists of this year’s KUKA Innovation Award.

[ KUKA ]

How soft should a robot foot be, anyway?

[ GVLab ]

At ANYbotics, we constantly release exciting new software features and payloads to our customers. The December 2022 update introduces major product developments that make it easier for operators to operate ANYmal, monitor gas leakages, perform high-precision reality capture, attain more insight from thermal measurements, and cover wider areas through new mobility features.

[ ANYbotics ]

Take a tour through our new ABB Robotics mega factory in Shanghai, China and see how we’re bringing the physical and digital worlds together for faster, more resilient and more efficient manufacturing and research.

[ ABB ]

On December 1, 2022, University of Michigan alum Zhen Zeng of JP Morgan AI Research spoke to robotics students as part of the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question: “What can I do with a robotics degree?”

[ UMich ]

This talk is from Nitin Sanket at WPI, on “AI-Powered Robotic Bees: A Journey Into The Mind And Body!”

The human fascination with mimicking ultra-efficient living beings like insects and birds has led to the rise of small autonomous robots. Smaller robots are safer, more agile, and task-distributable as swarms. One might wonder: Why do we not have small robots deployed in the wild today? I will present how the world’s first prototype of a RoboBeeHive was built using this philosophy. Finally, I will conclude with a recent theory called Novel Perception that utilizes the statistics of motion fields to tackle various classes of problems, from navigation to interaction. This method has the potential to be the go-to mathematical formulation for tackling the class of motion-field-based problems in robotics.

[ UPenn ]



When Xiaomi announced its CyberOne humanoid robot a couple of months back, it wasn’t entirely clear what the company was actually going to do with the robot. Our guess was that rather than pretending that CyberOne had some sort of practical purpose, Xiaomi would use it to explore possibilities with technology that may have useful applications elsewhere, but there was no explicit suggestion that any actual research would come out of it. In a nice surprise, Xiaomi roboticists have taught the robot to do something that is, if not exactly useful, at least loud: play the drums.

The input for this performance is a MIDI file, which the robot parses into drum beats. It then generates song-length sequences of coordinated whole-body trajectories synchronized to the music, which is tricky because the end effectors have to strike the drums exactly on the beat. CyberOne does a pretty decent job even when it’s going back and forth across the drum kit. This is perhaps not super cutting-edge humanoid research, but it’s still interesting to see what a company like Xiaomi has been up to. And to that end, we asked Zeyu Ren, a senior hardware engineer at the Xiaomi Robotics Lab, to answer a couple of questions for us.
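To give a sense of what that first parsing step might look like, here is a minimal sketch using the mido MIDI library. The file name, the note-to-limb mapping, and everything else here are our own illustrative assumptions, not Xiaomi’s pipeline:

```python
import mido  # pip install mido; "song.mid" below is a hypothetical file

# General MIDI percussion note numbers -> a plausible limb assignment.
# (This mapping is our assumption, not Xiaomi's.)
LIMB_FOR_NOTE = {
    36: "right_foot",  # bass drum
    38: "left_hand",   # snare
    42: "right_hand",  # closed hi-hat
    49: "right_hand",  # crash cymbal
}

beats = []
t = 0.0
for msg in mido.MidiFile("song.mid"):  # iteration yields delta times in seconds
    t += msg.time
    # Channel 10 (index 9) is percussion in General MIDI.
    if msg.type == "note_on" and msg.velocity > 0 and msg.channel == 9:
        limb = LIMB_FOR_NOTE.get(msg.note)
        if limb is not None:
            beats.append((t, limb, msg.note))

# Each (time, limb, note) triple would then index into a precomputed,
# collision-checked trajectory library, as Ren describes below.
for time_s, limb, note in beats[:8]:
    print(f"{time_s:7.3f}s  {limb:<10s}  note {note}")
```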

IEEE Spectrum: So why is Xiaomi working on a humanoid robot, anyway?

Zeyu Ren: There are three reasons why Xiaomi is working on humanoid robots. The first reason is that we are seeing a huge decline in the labor force in China, and the world. We are working on replacing the human labor force with humanoid robots even though there is a long way to go. The second reason is that we believe humanoid robots are the most technically challenging of all robot forms. By working on humanoid robots, we can also use this technology to solve problems on other robot forms, such as quadruped robots, robotic arms, and even wheeled robots. The third reason is that Xiaomi wants to be the most technically advanced company in China, and humanoid robots are sexy.

Why did you choose drumming to demonstrate your research?

Ren: After the official release of Xiaomi CyberOne on August 11, we got a lot of feedback from the public who didn’t have a background in robotics. They are more interested in seeing humanoid robots doing things that humans cannot easily do. Honestly speaking, it’s pretty difficult to find such scenarios, since we know that the first prototype of CyberOne is far behind humans.

But one day, one of our engineers who had just begun to play drums suggested that drumming may be an exception. She thought that compared to rookie drummers, humanoid robots have an advantage in hand-foot coordination and rhythmic control. We all thought it was a good idea, and drumming itself is super cool and interesting. So we chose drumming to demonstrate our research.

What was the most challenging part of this research?

Ren: The most challenging part of this research was that, when receiving long sequences of drum beats, CyberOne needs to assign them to each arm and leg and generate continuous, collision-free whole-body trajectories within its hardware constraints. So we extract the basic beats and build our drum-beat motion trajectory library offline through optimization. Then CyberOne can generate continuous trajectories consistent with any drum score. This approach gives CyberOne more freedom in playing the drums, limited only by the robot’s capabilities.

What different things do you hope that this research will help your robot do in the future?

Ren: Drumming requires CyberOne to coordinate whole-body motions to achieve fast, accurate movements over a large range. We first want to find the limits of our robot in terms of hardware and software to provide a reference for the next-generation design. Also, through this research we have built a complete set of automatic drumming methods that lets the robot perform different songs, and this experience will also help us more quickly develop robots that play other musical instruments.

What are you working on next?

Ren: We are working on the second generation of CyberOne, and hope to further improve its locomotion and manipulation ability. On the hardware level, we plan to add more degrees of freedom, integrate self-developed dexterous hands, and add more sensors. On the software level, more robust control algorithms for locomotion and vision will be developed.
