Feed aggregator

For an astonishing 20 years, the Sony Aibo has been the most sophisticated home robot you can buy. The first Aibo went on sale in 1999, and even though a dozen years separate 2005’s ERS-7 from the latest ERS-1000, no consumer robot released in that intervening time seriously challenged the Aibo.

Part of what made Aibo special was how open Sony was to user customization and programmability. Aibo served as the RoboCup Standard Platform for a decade, providing an accessible hardware platform that leveled the playing field for robotic soccer. Designed to stand up to the rigors of use by unsupervised consumers (and, presumably, their kids), Aibo offered both durability and versatility that compared fairly well to later, much more expensive robots like Nao.

Aibo ERS-1000: The newest model

The newest Aibo, the ERS-1000, was announced in late 2017 and is now available for US $2,900 in the United States and 198,000 yen in Japan. It’s faithful to the Aibo family, while benefiting from years of progress in robotics hardware and software. However, it wasn’t until last November that Sony opened up Aibo to programmers, by providing visual programming tools as well as access to an API (application programming interface). And over the holidays, Sony lent us an Aibo to try it out for ourselves.

This is not (I repeat not) an Aibo review: I’m not going to talk about how cute it is, how to feed it, how to teach it to play fetch, how weird it is that it pretends to pee sometimes, or how it feels to have it all snuggled up in your lap while you’re working at your computer. Instead, I’m going to talk about how to (metaphorically) rip it open and access its guts to get it to do exactly what you want.

Photo: Evan Ackerman/IEEE Spectrum The newest Aibo, the ERS-1000, was announced in late 2017 and is now available for US $2,900 in the United States and 198,000 yen in Japan.

As you read this, please keep in mind that I’m not much of a software engineer—my expertise extends about as far as Visual Basic, because as far as I’m concerned that’s the only programming language anyone needs to know. My experience here is that of someone who understands (in the abstract) how programming works, and who is willing to read documentation and ask for help, but I’m still very much a beginner at this. Fortunately, Sony has my back. For some of it, anyway.

Getting started with Aibo’s visual programming

The first thing to know about Sony’s approach to Aibo programming is that you don’t have access to everything. We’ll get into this more later, but in general, Aibo’s “personality” is completely protected and cannot be modified:

When you execute the program, Aibo has the freedom to decide which specific behavior to execute depending on his/her psychological state. The API respects Aibo's feelings so that you can enjoy programming while Aibo stays true to himself/herself.

This is a tricky thing for Sony, since each Aibo “evolves” its own unique personality, which is part of the appeal. Running a program on Aibo risks very obviously turning it from an autonomous entity into a mindless robot slave, so Sony has to be careful to maintain Aibo’s defining traits while still allowing you to customize its behavior. The compromise that they came up with is mostly effective, and when Aibo runs a program, it doesn’t disable its autonomous behaviors but rather adds the behaviors you’ve created to the existing ones. 

Aibo’s visual programming system is based on Scratch. If you’ve never used Scratch, that’s fine, because it’s a brilliantly easy and intuitive visual language to use, even for non-coders. Sony didn’t develop it—it’s a project out of MIT, and while it was originally designed for children, it’s great for adults who don’t have coding experience. Rather than having to type in code, Scratch is based around colorful blocks that graphically represent functions. The blocks are different shapes, and only fit together in a way that will yield a working bit of code. Variables appear in handy little drop-down menus, and you can just drag and drop different blocks to build as many programs as you want. You can even read through the code directly, and it’ll explain what it does in a way that makes intuitive sense, more or less:

Screenshot: Evan Ackerman/IEEE Spectrum A sample Aibo visual program from Sony.

Despite the simplicity of the visual programming language, it’s possible to create some fairly complex programs. You have access to control structures like if-then-else and wait-until, and multiple loops can run at the same time. Custom blocks allow you to nest things inside of other things, and you have access to variables and operators. Here’s a program that I put together in just a few minutes to get Aibo to entertain itself by kicking a ball around:

Screenshot: Evan Ackerman/IEEE Spectrum A program I created to make Aibo chase a ball around.

This program directs Aibo to respond to “let’s play” by making some noises and motions, locating and approaching its ball, kicking its ball, and then moving in some random directions before repeating the loop. Petting Aibo on its back will exit the loop.
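If it helps to see that logic in text form, here’s a rough Python-style sketch of the same control flow. Every function in it is a hypothetical stand-in for one of the Scratch-style blocks; Sony’s visual tool doesn’t expose anything like a Python interface, so treat this purely as an illustration of the loop structure.

import random

def block(name):
    # Hypothetical stand-in for an Aibo behavior block: just log what would happen.
    print(f"[Aibo] {name}")

def petted_on_back():
    # Placeholder for the "back is touched" sensing block.
    return random.random() < 0.1

def play_with_ball():
    while not petted_on_back():  # petting Aibo's back exits the loop
        block("make happy noises and motions")
        block("look for the pink ball and walk up to it")
        block("kick the pink ball")
        block(f"turn {random.choice(['left', 'right'])} and walk a few steps")

play_with_ball()  # in the real program, this is triggered by hearing "let's play"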

Programming Aibo: What you can (and can’t) do

It’s a lot of fun to explore all of Aibo’s different behaviors, although if you’re a new user, it does minimize a bit of the magic to see this big long list of everything that Aibo is capable of doing. The granularity of some of the commands is a little weird—there’s a command for “gets close to” an object, as well as a command for “gets closer to” an object. And rather than give you direct access to Aibo’s servos to convey emotions or subtle movement cues, you’re instead presented with a bewildering array of very specific options, like:

  • Aibo opens its mouth a little and closes it
  • Aibo has an “I get it” look
  • Aibo gives a high five with its right front paw
  • Aibo faces to the left petulantly
  • Aibo has a dream of becoming a human being and runs about

Unfortunately, there’s no way to “animate” Aibo directly—you don’t have servo-level control, and unlike many (if not most) programmable robots, Sony hasn’t provided a way for users to move Aibo’s servos and then have the robot play back those motions, which would have been simple and effective.

Running one of these programs can be a little frustrating at times, because there’s no indication of when (or if) Aibo transitions from its autonomous behavior to your program—you just run the program and then wait. Sony advises you to start each program with a command that puts Aibo’s autonomy on hold, but depending on what Aibo is in the middle of doing when you run your program, it may take it a little bit to finish its current behavior. My solution for this was to start each program with a sneeze command to let me know when things were actually running. This worked well enough I guess, but it’s not ideal, because sometimes Aibo sneezes by itself.

The biggest restriction of the visual programming tool is that as far as I can tell there’s no direct method of getting information back from Aibo—you can’t easily query the internal state of the robot. For example, if you want to know how much battery charge Aibo has, there’s a sensing block for that, but the best you seem to be able to do is have Aibo do specific things in response to the value of that block, like yap a set number of times to communicate what its charge is. More generally, however, it can be tough to write more interactive programs, because it’s hard to tell when, if, why, or how such programs are failing. From what I can tell, there’s no way to “step” through your program, or to see which commands are being executed when, making it very hard to debug anything complicated. And this is where the API comes in handy, since it does give you explicit information back.

Aibo API: How it works

There’s a vast chasm between the Aibo visual programming language and the API. Or at least, that’s how I felt about it. The visual programming is simple and friendly, but the API just tosses you straight into the deep end of the programming pool. The good news is that the majority of the stuff that the API allows you to do can also be done visually, but there are a few things that make the API worth having a crack at, if you’re willing to put the work in.

The first step to working with the Aibo API is to get a token, which is sort of like an access password for your Sony Aibo account. Sony’s instructions for this are clear enough, since it just involves clicking a single button. Step two is finding your Aibo’s unique device ID, and I found myself immediately out of my comfort zone with Sony’s code example of how to do that:

$ curl -X GET https://public.api.aibo.com/v1/devices \
-H "Authorization:Bearer ${accessToken}" 

As it turns out, “curl” (or cURL) is a common command line tool for sending and receiving data via various network protocols, and it’s free and included with Windows. I found my copy in C:\Windows\System32. Being able to paste my token directly into that bit of sample code and have it work would have been too easy—after a whole bunch of futzing around, I figured out that (in Windows) you need to explicitly call “curl.exe” in the command line and that you have to replace “${accessToken}” with your access token, as opposed to just the bit that says “accessToken.” This sort of thing may be super obvious to many people, but it wasn’t to me, and with the exception of some sample code and a reasonable amount of parameter-specific documentation, Sony itself offers very little hand-holding. But since figuring this stuff out is my job, on we go!
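For what it’s worth, the same device-listing request is only a few lines of Python using the third-party requests library, which sidesteps the Windows quoting headaches; the token string below is a placeholder, and the endpoint is the one from Sony’s sample above.

import requests

ACCESS_TOKEN = "paste-your-token-here"  # placeholder, not a real token

resp = requests.get(
    "https://public.api.aibo.com/v1/devices",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
print(resp.json())  # the response includes your Aibo's device ID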

Image: Sony How the Aibo API works: Your computer doesn’t talk directly to your robot. Instead, data flows between your computer and Sony’s cloud-based servers, and from the cloud to your robot. 

I don’t have a huge amount of experience with APIs (read: almost none), but the way that the Aibo API works seems a little clunky. As far as I can tell, everything runs through Sony’s Aibo server, which completely isolates you from the Aibo itself. As an example, let’s say we want to figure out how much battery Aibo has left. Rather than just sending a query to the robot and getting a response, we instead have to ask the Aibo server to ask Aibo, and then (separately) ask the Aibo server what Aibo’s response was. Literally, the process is to send an “Execute HungryStatus” command, which returns an execution ID, and then in a second command you request the result of that execution ID, which returns the value of HungryStatus. Weirdly, HungryStatus is not a percentage or a time remaining, but rather a string that goes from “famished” (battery too low to move) to “hungry” (needs to charge) to “enough” (charged enough to move). It’s a slightly strange combination of allowing you to get deep into Aibo’s guts while seemingly trying to avoid revealing that there’s a robot under there.
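Here’s a minimal Python sketch of that execute-then-poll flow. Be warned that the /execute and /executions paths, the capability name, and the response field names below are my best reading of Sony’s documentation rather than verbatim from it, so double-check them against the developer site before relying on this.

import time
import requests

ACCESS_TOKEN = "paste-your-token-here"   # placeholder
DEVICE_ID = "paste-your-deviceId-here"   # placeholder
BASE = "https://public.api.aibo.com/v1"
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Step 1: ask the Aibo cloud to ask Aibo how "hungry" (discharged) it is.
exec_resp = requests.post(
    f"{BASE}/devices/{DEVICE_ID}/capabilities/hungry_status/execute",  # path assumed
    headers=HEADERS,
    json={},
)
execution_id = exec_resp.json()["executionId"]  # field name assumed

# Step 2: poll the cloud for the result of that execution.
while True:
    result = requests.get(f"{BASE}/executions/{execution_id}", headers=HEADERS).json()
    if result.get("status") == "SUCCEEDED":  # status value assumed
        print(result["result"])  # a string like "famished", "hungry", or "enough"
        break
    time.sleep(1)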

Screenshot: Evan Ackerman/IEEE Spectrum Example of the code required to determine Aibo’s charge. (I blurred areas showing my Aibo’s device ID and token.)

Anyway, back to the API. I think most of the unique API functionality is related to Aibo’s state—how much is Aibo charged, how sleepy is Aibo, what is Aibo perceiving, where is Aibo being touched, that sort of thing. And even then, you can kludge together ways of figuring out what’s going on in Aibo’s lil’ head if you try hard enough with the visual programming, like by turning battery state into some number of yaps.

But the API does also offer a few features that can’t be easily replicated through visual programming. Among other things, you have access to useful information like which specific voice commands Aibo is responding to and exactly where (what angle) those commands are coming from, along with estimates of distance and direction to objects that Aibo recognizes. Really, though, the value of the API for advanced users is the potential of being able to have other bits of software interact directly with Aibo.

API possibilities, and limitations

For folks who are much better at programming than I am, the Aibo API does offer the potential to hook in other services. A programming expert I consulted suggested that it would be fairly straightforward to set things up so that (for example) Aibo would bark every time someone sends you a tweet. Doing this would require writing a Python script and hosting it somewhere in the cloud, which is beyond the scope of this review, but not at all beyond the scope of a programmer with modest skills and experience, I would imagine.
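As a very rough sketch of what that could look like, the script below polls some notification source once a minute and fires an Aibo action when something new shows up. The get_new_mention_count() stub and the play_bark capability name are hypothetical placeholders for whichever service and Aibo behavior you’d actually wire in.

import time
import requests

ACCESS_TOKEN = "paste-your-token-here"   # placeholder
DEVICE_ID = "paste-your-deviceId-here"   # placeholder
HEADERS = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

def get_new_mention_count():
    # Hypothetical stub: replace with a real Twitter/RSS/email check.
    return 0

def make_aibo_bark():
    # Same execute pattern as above; "play_bark" is an illustrative capability name.
    requests.post(
        f"https://public.api.aibo.com/v1/devices/{DEVICE_ID}/capabilities/play_bark/execute",
        headers=HEADERS,
        json={},
    )

while True:
    if get_new_mention_count() > 0:
        make_aibo_bark()
    time.sleep(60)  # check once a minute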

Fundamentally, the API means that just about anything can be used to send commands to Aibo, and the level of control that you have could even give Aibo a way to interact with other robots. It would just be nice if it was a little bit simpler, and a little more integrated, since there are some significant limitations worth mentioning.

For example, you have only indirect access to the majority of Aibo’s sensors, like the camera. Aibo will visually recognize a few specific objects, or a general “person,” but you can’t add new objects or differentiate between people (although Aibo can do this as part of its patrol feature). You can’t command Aibo to take a picture. Aibo can’t make noises that aren’t in its existing repertoire, and there’s no way to program custom motions. You also can’t access any of Aibo’s mapping data, or command it to go to specific places. It’s unfortunate that many of the features that justify Aibo’s cost, and differentiate it from something that’s more of a toy, aren’t accessible to developers at this point.

Photo: Evan Ackerman/IEEE Spectrum Aibo’s API gives users access to, among other things, specific voice commands the robot is responding to and exactly where (what angle) those commands are coming from, along with estimates of distance and direction to objects that Aibo recognizes.

Aibo’s programmability: The future

Overall, I appreciate the approach that Sony took with Aibo’s programmability, making it accessible to both absolute beginners as well as more experienced developers looking to link Aibo to other products and services. I haven’t yet seen any particularly compelling examples of folks leveraging this capability with Aibo, but the API has only been publicly available for a month or two. I would have liked to have seen more sample programs from Sony, especially more complex visual programs, and I would have really appreciated a gentler transition over to the API. Hopefully, both of these things can be addressed in the near future.

There’s a reluctance on Sony’s part to give users more control over Aibo. Some of that may be technical, and some of it may be privacy-related, but there are also omissions of functionality and limitations that don’t seem to make sense. I wonder if Sony is worried about risking an otherwise careful compromise between a robot that maintains its unique personality, and a robot that can be customized to do whatever you want it to do. As it stands, Sony is still in control of how Aibo moves, and how Aibo expresses emotions, which keeps the robot’s behavior consistent, even if it’s executing behaviors that you tell it to. 

At this point, I’m not sure that the Aibo API is full-featured and powerful enough to justify buying an Aibo purely for its developer potential, especially given the cost of the robot. If you already have an Aibo, you should definitely play with the new programming functions, because they’re free. I do feel like this is a significant step in a very positive direction for Sony, showing that they’re willing to commit resources to the nascent Aibo developer community, and I’m very much looking forward to seeing how Aibo’s capabilities continue to grow.

Photo: Evan Ackerman/IEEE Spectrum Aibo deserves a rest!

Thanks to Sony for lending us an Aibo unit for the purposes of this review. I named it Aibo, and I will miss its blue eyes. And special thanks to Kevin Finn for spending part of his holiday break helping me figure out how Aibo’s API works. If you need help with your Aibo, or help from a professional software engineer on any number of other things, you can find him here.

[ Aibo Developer Site ]

In recent years the field of soft robotics has gained a lot of interest both in academia and industry. In contrast to rigid robots, which are potentially very powerful and precise, soft robots are composed of compliant materials like gels or elastomers (Rich et al., 2018; Majidi, 2019). Their composition of nearly entirely soft materials offers the potential to extend the use of robotics to fields like healthcare (Burgner-Kahrs et al., 2015; Banerjee et al., 2018) and advance the emerging domain of cooperative human-machine interaction (Asbeck et al., 2014). One material class frequently used as actuators in soft robotics is electroactive polymers (EAPs). Especially dielectric elastomer actuators (DEAs), consisting of a thin elastomer membrane sandwiched between two compliant electrodes, offer promising characteristics for actuator drives (Pelrine et al., 2000). Under an applied electric field, the resulting electrostatic pressure leads to a reduction in thickness and an expansion in the free spatial directions. The resulting expansion can reach strain levels of more than 300% (Bar-Cohen, 2004). This paper presents a bioinspired worm-like crawling robot based on DEAs with additional textile reinforcement in its silicone structures. A special focus is set on the developed cylindrical actuator segments that act as linear actuators.

This paper presents a three-layered hybrid collision avoidance (COLAV) system for autonomous surface vehicles, compliant with rules 8 and 13–17 of the International Regulations for Preventing Collisions at Sea (COLREGs). The COLAV system consists of a high-level planner producing an energy-optimized trajectory, a model-predictive-control-based mid-level COLAV algorithm considering moving obstacles and the COLREGs, and the branching-course model predictive control algorithm for short-term COLAV handling emergency situations in accordance with the COLREGs. Previously developed algorithms by the authors are used for the high-level planner and short-term COLAV, while we in this paper further develop the mid-level algorithm to make it comply with COLREGs rules 13–17. This includes developing a state machine for classifying obstacle vessels using a combination of the geometrical situation, the distance and time to the closest point of approach (CPA) and a new CPA-like measure. The performance of the hybrid COLAV system is tested through numerical simulations for three scenarios representing a range of different challenges, including multi-obstacle situations with multiple simultaneously active COLREGs rules, and also obstacles ignoring the COLREGs. The COLAV system avoids collision in all the scenarios, and follows the energy-optimized trajectory when the obstacles do not interfere with it.

While direct local communication is very important for the organization of robot swarms, so far it has mostly been used for relatively simple tasks such as signaling robots’ preferences or states. Inspired by the emergence of meaning found in natural languages, more complex communication skills could allow robot swarms to tackle novel situations in ways that may not be a priori obvious to the experimenter. This would pave the way for the design of robot swarms with higher autonomy and adaptivity. The state of the art regarding the emergence of communication for robot swarms has mostly focused on offline evolutionary approaches, which showed that signaling and communication can emerge spontaneously even when not explicitly promoted. However, these approaches do not lead to complex, language-like communication skills, and signals are tightly linked to environmental and/or sensory-motor states that are specific to the task for which communication was evolved. To move beyond current practice, we advocate an approach to emergent communication in robot swarms based on language games. Thanks to language games, previous studies showed that cultural self-organization—rather than biological evolution—can be responsible for the complexity and expressive power of language. We suggest that swarm robotics can be an ideal test-bed to advance research on the emergence of language-like communication. The latter can be key to provide robot swarms with additional skills to support self-organization and adaptivity, enabling the design of more complex collective behaviors.

Daily human activity is characterized by a broad variety of movement tasks. This work summarizes the sagittal hip, knee, and ankle joint biomechanics for a broad range of daily movements, based on previously published literature, to identify requirements for robotic design. Maximum joint power, moment, angular velocity, and angular acceleration, as well as the movement-related range of motion and the mean absolute power were extracted, compared, and analyzed for essential and sportive movement tasks. We found that the full human range of motion is required to mimic human like performance and versatility. In general, sportive movements were found to exhibit the highest joint requirements in angular velocity, angular acceleration, moment, power, and mean absolute power. However, at the hip, essential movements, such as recovery, had comparable or even higher requirements. Further, we found that the moment and power demands were generally higher in stance, while the angular velocity and angular acceleration were mostly higher or equal in swing compared to stance for locomotion tasks. The extracted requirements provide a novel comprehensive overview that can help with the dimensioning of actuators enabling tailored assistance or rehabilitation for wearable lower limb robots, and to achieve essential, sportive or augmented performances that exceed natural human capabilities with humanoid robots.

Telerobotics aims to transfer human manipulation skills and dexterity over an arbitrary distance and at an arbitrary scale to a remote workplace. A telerobotic system that is transparent enables a natural and intuitive interaction. We postulate that embodiment (with three sub-components: sense of ownership, agency, and self-location) of the robotic system leads to optimal perceptual transparency and increases task performance. However, this has not yet been investigated directly. We reason along four premises and present findings from the literature that substantiate each of them: (1) the brain can embody non-bodily objects (e.g., robotic hands), (2) embodiment can be elicited with mediated sensorimotor interaction, (3) embodiment is robust against inconsistencies between the robotic system and the operator's body, and (4) embodiment positively correlates to dexterous task performance. We use the predictive encoding theory as a framework to interpret and discuss the results reported in the literature. Numerous previous studies have shown that it is possible to induce embodiment over a wide range of virtual and real extracorporeal objects (including artificial limbs, avatars, and android robots) through mediated sensorimotor interaction. Also, embodiment can occur for non-human morphologies including for elongated arms and a tail. In accordance with the predictive encoding theory, none of the sensory modalities is critical in establishing ownership, and discrepancies in multisensory signals do not necessarily lead to loss of embodiment. However, large discrepancies in terms of multisensory synchrony or visual likeness can prohibit embodiment from occurring. The literature provides less extensive support for the link between embodiment and (dexterous) task performance. However, data gathered with prosthetic hands do indicate a positive correlation. We conclude that all four premises are supported by direct or indirect evidence in the literature, suggesting that embodiment of a remote manipulator may improve dexterous performance in telerobotics. This warrants further implementation testing of embodiment in telerobotics. We formulate a first set of guidelines to apply embodiment in telerobotics and identify some important research topics.

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France
ICUAS 2020 – June 9-12, 2020 – Athens, Greece
CLAWAR 2020 – August 24-26, 2020 – Moscow, Russia

Let us know if you have suggestions for next week, and enjoy today’s videos.

Since Honda decided to stop further development of the beloved robot Asimo, attention has turned to other companies building advanced humanoids. One of them is UBTECH, which appears to be making steady progress with its Walker robot. At CES early this year, the company showed Walker pushing a cart, pouring a drink, standing on one foot, and even bending its body backward like a yogi.

We had such an amazing time at CES 2020 showing you the major upgrades we’ve made to Walker. With improved flexibility, stability, precision, and speed, Walker has come a long way since its initial debut at CES a few years back.

Walker is an intelligent Humanoid Service Robot designed with outstanding hardware, excellent motion ability and AI interactive performance – the most advanced robot UBTECH has ever created.

But UBTECH wasn’t done. It also demoed its service robot Cruzr and indoor inspection robot AIMBOT.

Cruzr, UBTECH’s enterprise service robot, was on full display at CES 2020!

Cruzr is a cloud-based intelligent humanoid robot that provides a new generation of service for a variety of industrial applications. Cruzr helps enhance and personalize the guest experience in consumer facing establishments such as retail, financial institutions, and hospitality.

At CES 2020, we showcased AIMBOT, an autonomous indoor monitoring robot. AIMBOT is used for intelligent and accurate indoor inspection, efficient inventory management, visitor verification, preventing safety hazards and more.

[ UBTECH ]

Generating complex movements in redundant robots like humanoids is usually done by means of multi-task controllers based on quadratic programming, where a multitude of tasks is organized according to strict or soft priorities.

Time-consuming tuning and expertise are required to choose suitable task priorities, and to optimize their gains.

Here, we automatically learn the controller configuration (soft and strict task priorities and Convergence Gains), looking for solutions that track a variety of desired task trajectories efficiently while preserving the robot’s balance.

We use multi-objective optimization to compare and choose among Pareto-optimal solutions that represent a trade-off of performance and robustness and can be transferred onto the real robot.

We experimentally validate our method by learning a control configuration for the iCub humanoid, to perform different whole-body tasks, such as picking up objects, reaching and opening doors.

[ Larsen/Inria ]

This week, roboticist and comedian Naomi Fitter wrote a fantastic guest post on her experiences with robot comedy. Here’s one of the performances she’s created, with her Nao humanoid talking and singing with comedian Sarah Hagen.

Sketch comedy duo act including the talented human/comedian Sarah Hagen and the Oregon State University SHARE Lab’s illustrious NAO robot.

[ Naomi Fitter ]

This work is part of Tim Hojnik’s PhD project, a partnership between CSIRO’s Data61 Robotics and Autonomous Systems Group and the Queensland University of Technology.

[ CSIRO ]

Who’s ready for Superbowl LIV!? The Gripper Guys are.

[ Soft Robotics ]

Researchers at the Max Planck Institute for Intelligent Systems in Stuttgart, Germany, have designed and fabricated an untethered microrobot that can slip along either a flat or curved surface in a liquid when exposed to ultrasound waves. Its propulsion force is two to three orders of magnitude stronger than the propulsion force of natural microorganisms such as bacteria or algae. Additionally, it can transport cargo while swimming. The acoustically propelled robot hence has significant potential to revolutionize the future minimally invasive treatment of patients.

[ Max Planck Institute for Intelligent Systems ]

Did you know Kuka has a giant linear robot? Now you do!

The three-axis linear robot KR 80L has Cartesian axes which are operated via the robot controller. The development of the new KR 80L benefited greatly from KUKA experience gained from many handling applications and our expertise as one of the leading suppliers of intelligent automation solutions.

The modular design allows workspaces from 0.75m³ up to 225m³ to be implemented, making the KUKA linear robot a safe investment for your automation. Minimal interference contours below the robot mean that it is ideally suited for linking work processes by carrying out loading and unloading, palletizing, handling or transfer tasks, for example. The use of proven, series-produced robotic drive components ensures utmost performance and reliability.

[ Kuka ]

Apparently Promobot brought one of its humanoids to New York City’s Bryant Park to help screen people for the coronavirus. NYC officers promptly ejected the robot from the park for lacking a permit, but not before a little robot dance party. 

[ Promobot ] via [ NY Post ]

LOVOT, which we’ve featured on our Robot Gift Guide, is very cute—at least when it has its furry skin on.

Unfortunately we don’t speak Japanese to understand the full presentation, but we applaud the fact that the company is willing to discuss—and show—what’s inside the robot. Given the high rate of consumer robot failures, more sharing and transparency could really help the industry.

[ Robot Start ]

Drones have the potential to change the African continent by revolutionizing the way deliveries are made, blood samples are processed, farmers grow their crops and more. To tackle the many challenges faced by Africa, the World Bank and partners convened the African Drone Forum in Kigali, Rwanda, from February 5-7, 2020. To welcome the audience of engineers, scientists, entrepreneurs, development experts and regulators, the World Bank and ADF team created this video.

[ African Drone Forum ]

We continue to scale our fully driverless experience -- with no one behind the wheel -- for our early riders in Metro Phoenix. We invited Arizona football legend Larry Fitzgerald to take a ride with our Waymo Driver. Watch all of Larry’s reactions in this video of his full, unedited ride.

[ Waymo ]

The humanoid Robot ARMAR-6 grasps unknown objects in a cluttered box autonomously.

[ H2T KIT ]

Quanser R&D engineers have been testing different bumper designs and materials to protect the QCar in collisions. This is a scale-speed equivalent of 120km/hr!

[ Quanser ]

Drone sales have exploded in the past few years, filling the air with millions of new aircraft. Simple modifications to these drones by criminals and terrorists have left people, privacy and physical and intellectual property totally exposed.

Fortem Technologies innovates to stay ahead of the threat, keeping pace with escalating drone threats worldwide.

With more than 3,650 captures at various attack vectors and speeds, DroneHunter is the leading, world-class interceptor drone.

[ Fortem Technologies ] via [ Engadget ]

This is an interesting application of collaborative robots at this car bumper manufacturer, where they mounted industrial cameras on FANUC cobots to perform visual quality-control checks. These visual inspections happen throughout the assembly line, with the robots operating right next to the human workers.

Discovering the many benefits a FANUC collaborative robot solution can provide.

Flex-N-Gate, a supplier of bumpers, exterior trim, lighting, chassis assemblies and other automotive products, uses inspection systems at their Ventra Ionia, Michigan plant to ensure product quality.

To help improve these processes, reduce costs and save floor space, Flex-N-Gate turned to FANUC for a collaborative robot solution, leveraging FANUC America’s 24/7/365 service network to support their cobot systems for a completely successful integration.

[ FANUC ]

In this video we present results on autonomous subterranean exploration inside an abandoned underground mine using the ANYmal legged robot. ANYmal is utilizing the proposed Graph-based Exploration Path Planner which ensures the efficient exploration of the complex underground environment, while simultaneously avoiding obstacles and respecting traversability constraints.

The designed planner first operates by engaging its local exploration mode, which guides the robot to explore along a mine corridor. When the system reaches a local dead-end, the global planning layer of the method is engaged and provides a new path to guide the robot towards a selected frontier of the explored space. The robot is thus re-positioned to this frontier and upon arrival the local planning mode is enabled again in order to enable the continuation of the exploration mission. Finally, provided a time budget for the mission, the global planner identifies the point that the robot must be commanded to return-to-home and provides an associated reference path. The presented mission is completely autonomous.

[ Robotic Systems Lab ]

Do all Roborock vacuums rock? Vacuum vlog Vacuum Wars did some extensive vacuuming tests to find out.

After testing and reviewing all of the robot vacuums Roborock has released so far, I think it’s time for me to do a big comparison video showing the differences between their various models, as well as choosing my favorite Roborock models in 3 different categories.

[ Vacuum Wars ]

Highlights from Lex Fridman’s interview with Jim Keller on Tesla, Elon Musk, Autopilot, and more.

Jim Keller is a legendary microprocessor engineer, having worked at AMD, Apple, Tesla, and now Intel. He’s known for his work on the AMD K7, K8, K12 and Zen microarchitectures, Apple A4, A5 processors, and co-author of the specifications for the x86-64 instruction set and HyperTransport interconnect.

[ Lex Fridman ]

Take a trip down the microworld as roboticists Paul McEuen and Marc Miskin explain how they design and mass-produce microrobots the size of a single cell, powered by atomically thin legs -- and show how these machines could one day be "piloted" to battle crop diseases or study your brain at the level of individual neurons.

[ TED Talks ]

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In my mythical free time outside of professorhood, I’m a stand-up comedian and improviser. As a comedian, I’ve often found myself wishing I could banter with modern commercial AI assistants. They don’t have enough comedic skills for my taste! This longing for cheeky AI eventually led me to study autonomous robot comedians, and to teach my own robot how to perform stand-up.

I’ve been fascinated with the relationship between comedy and AI since even before I started doing comedy on my own in 2013. When I moved to Los Angeles in 2017 as a postdoctoral scholar for the USC Interaction Lab, I began performing in roughly two booked comedy shows per week, and putting a robot onstage was too good an opportunity to pass up.

Programming a NAO robot for stand-up comedy is complicated. Some joke concepts came easily, but most were challenging to evoke. It can be tricky to write original comedy for a robot since robots have been part of television and cinema for quite some time. Despite this legacy, we wanted to come up with a perspective for the robot that was fresh and not derivative.

Another challenge was that in my human stand-up comedy, I write almost entirely from real-life experience, and I’ve never been a robot! I tried different thought exercises—imagining myself to be a robot with different annoyances, likes, dislikes, and “life” experiences. My improv comedy training with the Upright Citizens Brigade started to come in handy, as I could play-act being a robot, map classic (and even somewhat overdone) human jokes to fit robot experiences, and imagine things like, “What is a robot family?”, “What is a robot relationship like?”, and “What are drugs for a robot?”

As a robotics professor, you never quite know how thousands of dollars of improv classes will come into play in your professional life until they suddenly do! Along the way, I sought inspiration and premises from my comedy colleagues (especially fellow computer scientist/comedian Ajitesh Srivastava), although (at least for now) the robot’s final material is all written by myself and my husband, John. Early in our writing process, we made the awkward misstep of naming the robot Jon as well, and now when people ask how John’s doing, sometimes I don’t know which entity they’re talking about.

Searching for a voice for Jon was also a bit of a puzzle. We found the built-in NAO voice to be too childlike, and many modern text-to-speech voices to be too human-like for the character we were aiming to create. We sought an alternative that was distinctly robotic while still comprehensible, settling on Amazon Polly. Text-to-speech researchers would probably be astounded by the mounds of SSML (Speech Synthesis Markup Language) that we wrote to get the robot to clearly pronounce phrases that humans (or at least humans in the training dataset) have almost certainly never said, such as “I want to backpropagate all over your hidden layers” or “My only solace is re-reading Sheryl Sand-bot’s hit book, ‘Dial In.’” For now, we hand-engineered the SSML and also hand-selected robot movements to layer over each joke. Some efforts have been made by the robotics and NLP communities to automate these types of processes, but I don’t know of any foolproof solution—yet! 

During the first two performances of the robot, I encountered several cases in which the audience could not clearly hear the setup of a joke when they laughed long enough at the previous joke. This lapse in audibility is a big impediment to “getting the joke.” One way to address this problem is to lengthen the pause after each joke:

As shown in the video, this option is workable, but falls short of deftly-timed robot comedy. Luckily, my humble studio apartment contained a full battery of background noises and two expert human laughers. My husband and I modulated all aspects of apartment background noise, cued up laugh tracks, and laughed enthusiastically in search of a sensing strategy that would let the robot pause when it heard uproarious laughter, and then carry on once the crowd calmed down. The resulting audio processing tactic involved counting the number of sounds in each ~0.2-second period after the joke and watching for a moving average-filtered version of this signal to drop below an experimentally-determined threshold.
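For the curious, here’s a simplified Python sketch of that pause-for-laughter logic: count loud events in roughly 0.2-second windows, smooth the counts with a moving average, and hold the next joke until the smoothed value drops below a threshold. The microphone handling and the specific numbers are placeholders, not the actual show code.

from collections import deque

SMOOTH_N = 5            # moving-average length over ~0.2 s windows (about 1 s total)
QUIET_THRESHOLD = 2.0   # placeholder; tuned experimentally in the real system

def count_loud_events(window_samples):
    # Placeholder: count samples above an amplitude threshold in one ~0.2 s window.
    return sum(1 for s in window_samples if abs(s) > 0.3)

def wait_for_laughter_to_die_down(audio_windows):
    # audio_windows: an iterable yielding ~0.2 s chunks of microphone samples.
    recent = deque(maxlen=SMOOTH_N)
    for window in audio_windows:
        recent.append(count_loud_events(window))
        smoothed = sum(recent) / len(recent)
        if len(recent) == SMOOTH_N and smoothed < QUIET_THRESHOLD:
            return  # room is quiet enough; deliver the next joke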

Human comics not only vie for their jokes to be heard over audience laughter, but they also read the room and adapt to joke success and failure. For maximal entertainment, we wanted our robot to be able to do this, too. By summing the laughter signal described above over the most intense 1 second of the post-joke response, we were able to obtain rudimentary estimates of joke success based on thresholding and filtering the audio signal. This experimental strategy was workable but not perfect; its joke ratings matched labels from a human rater about 60 percent of the time and were judged as different but acceptable an additional 15 percent of the time. The robot used its joke success judgements to decide between possible celebratory or reconciliatory follow-on jokes. Even when the strategy was failing, the robot produced behavior that seemed genuinely sarcastic, which the audience loved.
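And here’s a matching sketch of the joke-success estimate: sum the per-window laughter counts over the loudest roughly one-second stretch after a joke and bucket the result. The thresholds and labels are illustrative, not the values from the study.

def rate_joke(window_counts, windows_per_second=5, good_threshold=15, ok_threshold=5):
    # window_counts: loud-event counts per ~0.2 s window after the punchline.
    best = 0
    for i in range(len(window_counts) - windows_per_second + 1):
        best = max(best, sum(window_counts[i:i + windows_per_second]))
    if best >= good_threshold:
        return "hit"    # choose a celebratory follow-on joke
    if best >= ok_threshold:
        return "ok"
    return "miss"       # choose a reconciliatory follow-on joke

# Example: a joke that drew a strong burst of laughter about a second in.
print(rate_joke([0, 1, 4, 6, 7, 5, 2, 1, 0, 0]))  # -> "hit"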

By this point, we were fairly sure that robot timing and adaptiveness of spoken sequences were important to comedic effectiveness, but we didn’t have any actual empirical evidence of this. As I stepped into my current role as an assistant professor at Oregon State University, it was the perfect time to design an experiment and begin gathering data! We recorded audio from 32 performances of Jon the Robot at comedy venues in Corvallis and Los Angeles, and began to crunch the numbers.

Our results showed that a robot with good timing was significantly funnier–a good confirmation of what the comedy community already expected. Adaptivity actually didn’t make the robot funnier over the course of a full performance, but it did improve the audience’s initial response to jokes about 80 percent of the time.

While this research was certainly fun to conduct, there were also some challenges and missteps along the way. One (half serious/half silly) problem was that we designed the robot to have a male voice, and as soon as I brought it to the heavily male-dominated local comedy scene, the robot quickly began to get more offers of stage time than I did. This felt like a careless oversight on my part—my own male-voiced robot was taking away my stage time! (Or sometimes I gave it up to Jon the Robot, for the sake of data.)

All of the robot’s audiences were very receptive, but some individual crowd members mildly heckled the robot. Because of our carefully-crafted writing, most of these hecklers were eventually won over by the robot’s active evaluation of the crowd, but a few weren’t. One audience member angrily left the performance, grumbling directly at the robot to “write your own jokes.”  While all of Jon’s jokes are original material, the robot doesn’t know how to generate its own comedy—at least, not that we’re ready to tell you about yet.

Writing comedy material for robots, especially as a roboticist myself, also can feel like a bit of a minefield. It’s easy to get people to laugh at quips about robot takeovers, and robot jokes that are R-rated are also reliably funny, if not particularly creative. Getting the attendees of a performance to learn something about robotics while also enjoying themselves is of great interest to me as a robotics professor, but comedy shows can lose momentum if they turn too instructional. My current approach to writing material for shows includes a bit of all of the above concepts—in the end, simply getting people to genuinely laugh is a great triumph. 

Hopefully by now you’re excited about robot comedy! If so, you’re in luck– Jon the Robot performs quarterly in Corvallis, Ore., and is going on tour, starting with the ACM/IEEE International Conference on Human-Robot Interaction this year in Cambridge, U.K. And trust me—there’s nothing like “live”—er, well, “physically embodied”—robot comedy!

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

In my mythical free time outside of professorhood, I’m a stand-up comedian and improviser. As a comedian, I’ve often found myself wishing I could banter with modern commercial AI assistants. They don’t have enough comedic skills for my taste! This longing for cheeky AI eventually led me to study autonomous robot comedians, and to teach my own robot how to perform stand-up.

I’ve been fascinated with the relationship between comedy and AI even before I started doing comedy on my own in 2013. When I moved to Los Angeles in 2017 as a postdoctoral scholar for the USC Interaction Lab, I began performing in roughly two booked comedy shows per week, and I found myself with too good of an opportunity for putting a robot onstage to pass up. 

Programming a NAO robot for stand-up comedy is complicated. Some joke concepts came easily, but most were challenging to evoke. It can be tricky to write original comedy for a robot since robots have been part of television and cinema for quite some time. Despite this legacy, we wanted to come up with a perspective for the robot that was fresh and not derivative.

Another challenge was that in my human stand-up comedy, I write almost entirely from real-life experience, and I’ve never been a robot! I tried different thought exercises—imagining myself to be a robot with different annoyances, likes, dislikes, and “life” experiences. My improv comedy training with the Upright Citizens Brigade started to come in handy, as I could play-act being a robot, map classic (and even somewhat overdone) human jokes to fit robot experiences, and imagine things like, “What is a robot family?”, “What is a robot relationship like?”, and “What are drugs for a robot?”

As a robotics professor, you never quite know how thousands of dollars of improv classes will come into play in your professional life until they suddenly do! Along the way, I sought inspiration and premises from my comedy colleagues (especially fellow computer scientist/comedian Ajitesh Srivastava), although (at least for now) the robot’s final material is all written by me and my husband, John. Early in our writing process, we made the awkward misstep of naming the robot Jon as well, and now when people ask how John’s doing, sometimes I don’t know which entity they’re talking about.

Searching for a voice for Jon was also a bit of a puzzle. We found the built-in NAO voice to be too childlike, and many modern text-to-speech voices to be too human-like for the character we were aiming to create. We sought an alternative that was distinctly robotic while still comprehensible, settling on Amazon Polly. Text-to-speech researchers would probably be astounded by the mounds of SSML (Speech Synthesis Markup Language) that we wrote to get the robot to clearly pronounce phrases that humans (or at least humans in the training dataset) have almost certainly never said, such as “I want to backpropagate all over your hidden layers” or “My only solace is re-reading Sheryl Sand-bot’s hit book, ‘Dial In.’” For now, we hand-engineered the SSML and also hand-selected robot movements to layer over each joke. Some efforts have been made by the robotics and NLP communities to automate these types of processes, but I don’t know of any foolproof solution—yet! 
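For a concrete sense of what that hand-tuning looks like, here is a small illustrative sketch in Python. This is not the show’s actual markup or code: the SSML values and the Polly voice are assumptions, and the boto3 call is simply the standard way to hand Amazon Polly a block of SSML.

import boto3

polly = boto3.client("polly")

# Hand-tuned SSML: slow down the technical phrase slightly and add a short
# beat before the punchline. (Illustrative values, not the markup used for Jon.)
ssml = (
    "<speak>"
    'I want to <prosody rate="90%">backpropagate</prosody>'
    '<break time="300ms"/> all over your hidden layers.'
    "</speak>"
)

response = polly.synthesize_speech(
    Text=ssml,
    TextType="ssml",
    VoiceId="Matthew",   # assumed voice; the article does not name the one used
    OutputFormat="mp3",
)

with open("joke.mp3", "wb") as f:
    f.write(response["AudioStream"].read())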

During the first two performances of the robot, I encountered several cases in which the audience could not clearly hear the setup of a joke because they were still laughing at the previous one. This lapse in audibility is a big impediment to “getting the joke.” One way to address this problem is to lengthen the pause after each joke:

As shown in the video, this option is workable, but falls short of deftly-timed robot comedy. Luckily, my humble studio apartment contained a full battery of background noises and two expert human laughers. My husband and I modulated all aspects of apartment background noise, cued up laugh tracks, and laughed enthusiastically in search of a sensing strategy that would let the robot pause when it heard uproarious laughter, and then carry on once the crowd calmed down. The resulting audio processing tactic involved counting the number of sounds in each ~0.2-second period after the joke and watching for a moving average-filtered version of this signal to drop below an experimentally-determined threshold.
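A rough reconstruction of that tactic, based only on the description above (the window length matches the article; the event and quiet thresholds are made-up placeholders), might look like this in Python:

import numpy as np

def wait_for_laughter_to_subside(audio, sample_rate=16000, window_s=0.2,
                                 amp_threshold=0.1, avg_len=5, quiet_level=2.0):
    """Return the sample index at which the robot may resume, or None.

    audio: mono samples normalized to [-1, 1], recorded after the punchline.
    amp_threshold, avg_len, and quiet_level are illustrative placeholders, not
    the experimentally determined values used for Jon the Robot.
    """
    window = int(window_s * sample_rate)
    counts = []
    for start in range(0, len(audio) - window + 1, window):
        chunk = audio[start:start + window]
        # Count "sound events": upward crossings of the amplitude threshold.
        above = np.abs(chunk) > amp_threshold
        counts.append(int(np.count_nonzero(above[1:] & ~above[:-1])))
        # Moving-average filter over the most recent windows.
        if len(counts) >= avg_len and float(np.mean(counts[-avg_len:])) < quiet_level:
            return start + window  # the crowd has calmed down; deliver the next joke
    return None  # laughter never dropped below the threshold in this clip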

Human comics not only vie for their jokes to be heard over audience laughter, but they also read the room and adapt to joke success and failure. For maximal entertainment, we wanted our robot to be able to do this, too. By summing the laughter signal described above over the most intense 1 second of the post-joke response, we were able to obtain rudimentary estimates of joke success based on thresholding and filtering the audio signal. This experimental strategy was workable but not perfect; its joke ratings matched labels from a human rater about 60 percent of the time and were judged as different but acceptable an additional 15 percent of the time. The robot used its joke success judgements to decide between possible celebratory or reconciliatory follow-on jokes. Even when the strategy was failing, the robot produced behavior that seemed genuinely sarcastic, which the audience loved.
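The rating step can be sketched the same way, again with placeholder cutoffs rather than the tuned values: sum the per-window event counts over the most intense one-second stretch after the punchline, then bin the total.

def rate_joke(window_counts, windows_per_second=5,
              hit_threshold=12, okay_threshold=5):
    # window_counts: per-window sound-event counts, as produced by the sketch
    # above (five ~0.2-second windows make up one second). The thresholds are
    # assumptions for illustration only.
    best = 0
    for i in range(max(1, len(window_counts) - windows_per_second + 1)):
        best = max(best, sum(window_counts[i:i + windows_per_second]))
    if best >= hit_threshold:
        return "hit"    # choose a celebratory follow-on joke
    if best >= okay_threshold:
        return "okay"
    return "miss"       # choose a reconciliatory follow-on joke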

By this point, we were fairly sure that robot timing and adaptiveness of spoken sequences were important to comedic effectiveness, but we didn’t have any actual empirical evidence of this. As I stepped into my current role as an assistant professor at Oregon State University, it was the perfect time to design an experiment and begin gathering data! We recorded audio from 32 performances of Jon the Robot at comedy venues in Corvallis and Los Angeles, and began to crunch the numbers.

Our results showed that a robot with good timing was significantly funnier, a good confirmation of what the comedy community already expected. Adaptivity actually didn’t make the robot funnier over the course of a full performance, but it did improve the audience’s initial response to jokes about 80 percent of the time.

While this research was certainly fun to conduct, there were also some challenges and missteps along the way. One (half serious/half silly) problem was that we designed the robot to have a male voice, and as soon as I brought it to the heavily male-dominated local comedy scene, the robot quickly began to get more offers of stage time than I did. This felt like a careless oversight on my part—my own male-voiced robot was taking away my stage time! (Or sometimes I gave it up to Jon the Robot, for the sake of data.)

All of the robot’s audiences were very receptive, but some individual crowd members mildly heckled the robot. Because of our carefully-crafted writing, most of these hecklers were eventually won over by the robot’s active evaluation of the crowd, but a few weren’t. One audience member angrily left the performance, grumbling directly at the robot to “write your own jokes.”  While all of Jon’s jokes are original material, the robot doesn’t know how to generate its own comedy—at least, not that we’re ready to tell you about yet.

Writing comedy material for robots, especially as a roboticist myself, also can feel like a bit of a minefield. It’s easy to get people to laugh at quips about robot takeovers, and robot jokes that are R-rated are also reliably funny, if not particularly creative. Getting the attendees of a performance to learn something about robotics while also enjoying themselves is of great interest to me as a robotics professor, but comedy shows can lose momentum if they turn too instructional. My current approach to writing material for shows includes a bit of all of the above concepts—in the end, simply getting people to genuinely laugh is a great triumph. 

Hopefully by now you’re excited about robot comedy! If so, you’re in luck: Jon the Robot performs quarterly in Corvallis, Ore., and is going on tour, starting with the ACM/IEEE International Conference on Human-Robot Interaction this year in Cambridge, U.K. And trust me—there’s nothing like “live”—er, well, “physically embodied”—robot comedy!

Naomi Fitter is an assistant professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University, where her Social Haptics, Assistive Robotics, and Embodiment (SHARE) research group aims to equip robots with the ability to engage and empower people in interactions from playful high-fives to challenging physical therapy routines. She completed her doctoral work in the GRASP Laboratory’s Haptics Group and was a postdoctoral scholar in the University of Southern California’s Interaction Lab from 2017 to 2018. Naomi’s not-so-secret pastime is performing stand-up and improv comedy.

We’ve all seen drone displays—massive swarms of tiny drones, each carrying a light, that swarm together in carefully choreographed patterns to form giant (albeit very low resolution) 3D shapes in the sky at night. It’s cool, but it’s not particularly novel anymore, and without thousands of drones, the amount of detail that you can expect out of the display is not all that great.

CollMot Entertainment, a Hungarian company that puts on traditional drone shows, has been working on something a little bit different. Instead of using drones as pixels, they’ve developed a system that uses drones to generate an enormous screen in the sky, and then laser projectors draw on that screen to create “the largest 3D display you have ever seen.”

The video appears to show an array of drones carrying smoke generators, which collectively create a backdrop that can reflect laser light that’s projected from the ground. CollMot, based in Budapest, collaborated with German companies Phase 7 and LaserAnimation Sollinger to jointly develop the technology. They want to keep the details under wraps for now, but we got some additional information from Csilla Vitályos, head of business development at CollMot.

IEEE Spectrum: Can you describe what the “drone-laser technology” is and how the system operates?

Drone-laser technology is a special combination of our drone swarms and a ground-based or aerial laser. The intelligent drone swarm creates a giant canvas in the air with uniquely controlled smoke machines and real-time active swarm control. The laser projects onto this special aerial smoke canvas, creating the largest 2D and 3D laser displays ever seen.

What exactly are we seeing in the video?

This video shows how much more we can visualize with such technology compared to individual light dots represented by standard drone shows. The footage was taken on one of our tests out in the field, producing shiny 3D laser images of around 50 to 150 meters in width up in the air.

Image: CollMot Entertainment

What are the technical specifications of the system?

We work with a drone fleet of 10 to 50 special intelligent drones to accomplish such a production, which can last for several minutes and can contain very detailed custom visuals. Creating a stable visual without proper technology and experience is very challenging as there are several environmental parameters that affect the results. We have put a lot of time and energy into our experiments lately to find the best solutions for such holographic-like aerial displays.

What is unique about this system, and what can it do that other drone display technologies can’t?

The most stunning difference compared to standard drone shows (which we also provide and like a lot) is that while in usual drone light shows each drone is a single pixel in the sky, here we can visualize colorful lines and curves as well. A point is zero-dimensional; a line is one-dimensional. Try to draw something with a limited number of points and try to do the same with lines. You will experience the difference immediately.

Can you share anything else about the system?

At this point we would like to keep the drone-related technical details as part of our secret formula but we are more than happy to present our technology’s scope of application at events in the future.

[ CollMot ]

David Zarrouk’s lab at Ben Gurion University, in Israel, is well known for developing creative, highly mobile robots that use a minimal number of actuators. Their latest robot is called RCTR (Reconfigurable Continuous Track Robot), and it manages to change its entire body shape on a link-by-link basis, using just one extra actuator to “build its own track in the air as it advances.”

The concept behind this robot is similar to Zarrouk’s reconfigurable robotic arm, which we wrote about a few years ago. That arm is made up of a bunch of links that are attached to each other through passive joints, and a little robotic module can travel across those links and adjust the angle of each joint separately to reconfigure the arm. 

Image: Ben Gurion University The robot’s locking mechanism (located in the front of the robot’s body) can lock the track links at a 20° angle (A) or a straight angle (B), or it can keep the track links unlocked (C).

RCTR takes this idea and flips it around, so that instead of an actuator moving along a bunch of flexible links, you have a bunch of flexible links (the track) moving across an actuator. Each link in the track has a locking pin, and depending on what the actuator is set to when that link moves across it, the locking pin can be engaged such that the following link gets fixed at a relative angle of either zero degrees or 20 degrees. It’s this ability to lock the links of the track—turning the robot from flexible to stiff—that allows RCTR to rear up to pass over an obstacle, and do the other stuff that you can see in the video. And to keep the robot from fighting against its own tracks, the rear of the robot has a passive system that disengages the locking pins on every link to reset the flexibility of the track as it passes over the top. 
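To build intuition for how a handful of per-link locking decisions adds up to a whole body shape, here is a toy kinematic sketch. It is not code from the paper: the link length is an arbitrary placeholder, and unlocked (still compliant) links aren’t modeled, only links locked straight or at 20 degrees.

import math

def track_shape(lock_angles_deg, link_length=0.05):
    # Chain the locked track links together: each link is fixed at a relative
    # angle of 0 or 20 degrees with respect to the previous one.
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for angle in lock_angles_deg:
        heading += math.radians(angle)
        x += link_length * math.cos(heading)
        y += link_length * math.sin(heading)
        points.append((x, y))
    return points

# Example: five links locked straight, then five locked at 20 degrees each,
# which curls the front of the track upward to rear over an obstacle.
shape = track_shape([0] * 5 + [20] * 5)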

The biggest downside to this robot is that it’s not able to, uh, steer. Adding steering wouldn’t be particularly difficult, although it would mean a hardware redesign: the simplest solution is likely to do what most other tracked vehicles do, and use a pair of tracks and skid-steering, although you could also attach two modules front to back with a powered hinge between them. The researchers are also working on a locomotion planning algorithm for handling a variety of terrain, presumably by working out the best combination of rigid and flexible links to apply to different obstacles.

“A Minimally Actuated Reconfigurable Continuous Track Robot,” by Tal Kislassi and David Zarrouk from Ben Gurion University in Israel, is published in IEEE Robotics and Automation Letters.

[ RA-L ] via [ BGU ]

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
HRI 2020 – March 23-26, 2020 – Cambridge, U.K.
ICARSC 2020 – April 15-17, 2020 – Ponta Delgada, Azores
ICRA 2020 – May 31-June 4, 2020 – Paris, France

Let us know if you have suggestions for next week, and enjoy today’s videos.

Automaton contributor Fan Shi, who helps with our coverage of robotics in Asia, shared a few videos from China showing ways in which robots might be useful in helping combat the spread of the deadly coronavirus. These include using robots to deliver medicine and food, and to disinfect rooms.

And according to some reports, doctors at a Seattle area hospital are using a telepresence robot to treat a man infected with the virus, the first confirmed case of coronavirus in the United States.

Watch until 0:44 to get your mind blown by MiniCheetah.

[ MIT ]

This new video from Logistics Gliders shows more footage of how these disposable cargo UAVs land. It’s not pretty, but it’s very cost-effective.

[ Logistics Gliders ]

Thanks Marti!

For the KUKA Innovation Award 2019, about 30 research teams from all over the world applied with concepts on the topic of Healthy Living. The applicants were asked to develop an innovative concept using the KUKA LBR Med for use in hospitals and rehabilitation centers. At MEDICA, the world's largest medical trade fair, the five finalist teams presented their innovative applications.

[ Kuka ]

Unlike most dogs, I think Aibo is cuter with transparent skin.

[ Aibo ] via [ RobotStart ]

We’ve written extensively about Realtime Robotics, and here’s their motion-planning software running on a couple of collision-prone picking robots at IREX.

[ Realtime Robotics ] via [ sbbit ]

Tech United is already looking hard to beat for RoboCup 2020.

[ Tech United ]

In its third field experiment, DARPA's OFFensive Swarm-Enabled Tactics (OFFSET) program deployed swarms of autonomous air and ground vehicles to demonstrate a raid in an urban area. The field experiment took place at the Combined Arms Collective Training Facility (CACTF) at the Camp Shelby Joint Forces Training Center in Mississippi.

The OFFSET program envisions swarms of up to 250 collaborative autonomous systems providing critical insights to small ground units in urban areas where limited sight lines and tight spaces can obscure hazards, as well as constrain mobility and communications.

[ DARPA ]

Looks like one of Morgan Pope’s robotic acrobats is suiting up for Disney:

[ Disney ] via [ Gizmodo ]

Here are some brief video highlights of the more unusual robots that were on display at IREX—including faceless robot baby Hiro-chan—from Japanese tech journalist Kazumichi Moriyama.

[ sbbit ]

The Oxford Dynamic Robot Systems Group has six papers at ICRA this year, and they’ve put together this teaser video.

[ DRS ]

Pepper and NAO had a busy 2019:

[ Softbank ]

Let’s talk about science! Watch the fourth episode of our #EZScience series to learn about NASA’s upcoming Mars 2020 rover mission by looking back at the Mars Pathfinder mission and Sojourner rover. Discover the innovative elements of Mars 2020 (including a small solar-powered helicopter!) and what we hope to learn about the Red Planet when our new rover arrives in February 2021.

[ NASA ]

Chen Li from JHU gave a talk about how snakes climb stairs, which is an important thing to know.

[ LCSR ]

This week’s CMU RI Seminar comes from Hadas Kress-Gazit at Cornell, on “Formal Synthesis for Robots.”

In this talk I will describe how formal methods such as synthesis – automatically creating a system from a formal specification – can be leveraged to design robots, explain and provide guarantees for their behavior, and even identify skills they might be missing. I will discuss the benefits and challenges of synthesis techniques and will give examples of different robotic systems including modular robots, swarms and robots interacting with people.

[ CMU RI ]

Drones of all sorts are getting smaller and cheaper, and that’s great—it makes them more accessible to everyone, and opens up new use cases for which big expensive drones would be, you know, too big and expensive. The problem with very small drones, particularly those with fixed-wing designs, is that they tend to be inefficient fliers, and are very susceptible to wind gusts as well as air turbulence caused by objects that they might be flying close to. Unfortunately, designing for resilience and designing for efficiency are two different things: Efficient wings are long and thin, and resilient wings are short and fat. You can’t really do both at the same time, but that’s okay, because if you tried to make long and thin wings for micro aerial vehicles (MAVs) they’d likely just snap off. So stubby wings it is!

In a paper published this week in Science Robotics, researchers from Brown University and EPFL are presenting a new wing design that’s able to deliver both highly efficient flight and robustness to turbulence at the same time. A prototype 100-gram MAV using this wing design can fly for nearly 3 hours, which is four times longer than similar drones with conventional wings. How did they come up with a wing design that offered such a massive improvement? Well, they didn’t—they stole it from birds.

Conventional airfoils work best when you have airflow that “sticks” to the wing over as much of the wing surface as possible. When flow over an airfoil separates from the surface of the wing, it leads to a bunch of turbulence over the wing and a loss of lift. Aircraft wings employ all kinds of tricks to minimize flow separation, like leading edge extensions and vortex generators. Flow separation can lead to abrupt changes in lift, to loss of control, and to stalls. Flow separation is bad.

For many large insects and small birds, though, flow separation is just how they roll. In fact,  many small birds have wing features that have evolved specifically to cause flow separation right at the leading edge of the wing. Why would you want that if flow separation is bad? It turns out that flow separation is mostly bad for traditional airfoil designs, where it can be unpredictable and difficult to manage. But if you design a wing around flow separation, controlling where it happens and how the resulting turbulent flow over the wing is managed, things aren’t so bad. Actually, things can be pretty good. Since most of your wing is in turbulent airflow all the time, it’s highly resistant to any other turbulent air that your MAV might be flying through, which is a big problem for tiny outdoor fliers.

Image: Brown/EPFL/Science Robotics Photo of the MAV with the top surface of the wing removed to show how batteries and electronics are integrated inside. A diagram (bottom) shows the section of the bio-inspired airfoil, indicating how the flow separates at the sharp leading edge, transitions to turbulence, and reattaches over the flap.

In the MAV demonstrator created by the researchers, the wing (or SFA, for separated flow airfoil) is completely flat, like a piece of plywood, and the square front causes flow separation right at the leading edge of the wing. There’s an area of separated, turbulent flow over the front half of the wing, and then a rounded flap that hangs off the trailing edge of the wing pulls the flow back down again as air moving over the plate speeds up to pass over the flap. 

You may have noticed that there’s an area over the front 40 percent of the wing where the flow has separated (called a “separation bubble”), lowering lift efficiency over that section of the wing. This does mean that the maximum aerodynamic efficiency of the SFA is somewhat lower than you can get with a more conventional airfoil, where separation bubbles are avoided and more of the wing generates lift. However, the SFA design more than makes up for this with its wing aspect ratio—the ratio of wing length to wing width. Low aspect ratio wings are short and fat, while high aspect ratio wings are long and thin, and the higher the aspect ratio, the more efficient the wing is.

The SFA MAV has wings with an aspect ratio of 6, while similarly sized MAVs have wings with aspect ratios of between 1 and 2.5. Since lift-to-drag ratio increases with aspect ratio, that makes a huge difference to efficiency. In general, you tend to see those stubby low aspect ratio wings on MAVs because it’s difficult to structurally support long, thin, high aspect ratio wings on small platforms. But since the SFA MAV has no use for the conventional aerodynamics of traditional contoured wings, it just uses high aspect ratio wings that are thick enough to support themselves, and this comes with some other benefits. Thick wings can be stuffed full of batteries, and with batteries (and other payload) in the wings, you don’t need a fuselage anymore. With a MAV that’s basically all wing, the propeller in front sends high speed airflow directly over the center section of the wing itself, boosting lift by 20 to 30 percent, which is huge.
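The aerodynamics behind that claim is the textbook induced-drag relation for a finite wing, CD_i = CL^2 / (pi * e * AR). This is standard background rather than an analysis from the paper, and the lift coefficient and Oswald efficiency factor below are assumed values chosen only to illustrate the trend.

import math

def induced_drag_coefficient(cl, aspect_ratio, oswald_e=0.8):
    # Classic finite-wing relation: CD_i = CL^2 / (pi * e * AR).
    return cl ** 2 / (math.pi * oswald_e * aspect_ratio)

cd_stubby = induced_drag_coefficient(cl=0.8, aspect_ratio=2.0)   # typical MAV wing
cd_sfa = induced_drag_coefficient(cl=0.8, aspect_ratio=6.0)      # the SFA MAV's wing
print(round(cd_stubby / cd_sfa, 1))  # -> 3.0: the higher-aspect-ratio wing pays a third of the induced drag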

The challenge moving forward, say the researchers, is that current modeling tools can’t really handle the complex aerodynamics of the separated flow wing. They’ve been doing experiments in a wind tunnel, but it’s difficult to optimize the design that way. Still, it seems like the potential for consistent, predictable performance even under turbulence, increased efficiency, and being able to stuff a bunch of payload directly into a chunky wing could be very, very useful for the next generation of micro (and nano) air vehicles.

“A bioinspired Separated Flow wing provides turbulence resilience and aerodynamic efficiency for miniature drones,” by Matteo Di Luca, Stefano Mintchev, Yunxing Su, Eric Shaw, and Kenneth Breuer from Brown University and EPFL, appears in Science Robotics.

[ Science Robotics ]

The facets of autonomous car development that automakers tend to get excited about are things like interpreting sensor data, decision making, and motion planning.

Unfortunately, if you want to make self-driving cars, there’s all kinds of other stuff that you need to get figured out first, and much of it is really difficult but also absolutely critical. Things like, how do you set up a reliable network inside of your vehicle? How do you manage memory and data recording and logging? How do you get your sensors and computers to all talk to each other at the same time? And how do you make sure it’s all stable and safe?

In robotics, the Robot Operating System (ROS) has offered an open-source solution for many of these challenges. ROS provides the groundwork for researchers and companies to build off of, so that they can focus on the specific problems that they’re interested in without having to spend time and money on setting up all that underlying software infrastructure first.

Apex.AI’s Apex.OS, which is having its version 1.0 release today, extends this idea from robotics to autonomous cars. It promises to help autonomous carmakers shorten their development timelines, and if it has the same effect on autonomous cars as ROS has had on robotics, it could help accelerate the entire autonomous car industry.

Image: Apex.AI

For more about what this 1.0 software release offers, we spoke with Apex.AI CEO Jan Becker.

IEEE Spectrum: What exactly can Apex.OS do, and what doesn't it do? 

Jan Becker: Apex.OS is a fork of ROS 2 that has been made robust and reliable so that it can be used for the development and deployment of highly safety-critical systems such as autonomous vehicles, robots, and aerospace applications. Apex.OS is API-compatible with ROS 2. In a nutshell, Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. The components enable customers to focus on building their specific applications without having to worry about message passing, reliable real-time execution, hardware integration, and more.

Apex.OS is not a full [self-driving software] stack. Apex.OS enables customers to build their full stack based on their needs. We have built an automotive-grade 3D point cloud/lidar object detection and tracking component and we are in the process of building a lidar-based localizer, which is available as Apex.Autonomy. In addition, we are starting to work with other algorithmic component suppliers to integrate Apex.OS APIs into their software. These components make use of Apex.OS APIs, but are available separately, which allows customers to assemble a customized full software stack from building blocks such that it exactly fits their needs. The algorithmic components re-use the open architecture which is currently being built in the open source Autoware.Auto project.
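Since Apex.OS is described as API-compatible with ROS 2, the day-to-day programming model looks like writing ordinary ROS 2 nodes. As a point of reference only, here is a minimal publisher in plain ROS 2 Python (rclpy); this is standard ROS 2 code, not Apex.OS-specific API, which Apex.AI has not published.

import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class Talker(Node):
    def __init__(self):
        super().__init__("talker")
        # Publish on the "chatter" topic with a queue depth of 10.
        self.pub = self.create_publisher(String, "chatter", 10)
        self.timer = self.create_timer(0.5, self.tick)

    def tick(self):
        msg = String()
        msg.data = "hello from a ROS 2-style node"
        self.pub.publish(msg)

def main():
    rclpy.init()
    node = Talker()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()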

So if every autonomous vehicle company started using Apex.OS, those companies would still be able to develop different capabilities?

Apex.OS is an SDK for autonomous driving software and other safety-critical mobility applications. Just like iOS SDK provides an SDK for iPhone app developers enabling them to focus on the application, Apex.OS provides an SDK to developers of safety-critical mobility applications. 

Every autonomous mobility system deployed into a public environment must be safe. We enable customers to focus on their application without having to worry about the safety of the underlying components. Organizations will differentiate themselves through performance, discrete features, and other product capabilities. By adopting Apex.OS, we enable them to focus on developing these differentiators. 

What's the minimum viable vehicle that I could install Apex.OS on and have it drive autonomously? 

In terms of compute hardware, we showed Apex.OS running on a Renesas R-Car H3 and on a Quanta V3NP at CES 2020. The R-Car H3 contains just four ARM Cortex-A57 cores and four ARM Cortex-A53 cores and is the smallest ECU for which our customers have requested support. You can install Apex.OS on much smaller systems, but this is the smallest one we have tested extensively so far, and which is also powering our vehicle.

We are currently adding support for the Renesas R-Car V3H, which contains four ARM Cortex-A53 cores (and no ARM Cortex-A57 cores) and an additional image processing processor. 

You suggest that Apex.OS is also useful for other robots and drones, in addition to autonomous vehicles. Can you describe how Apex.OS would benefit applications in these spaces?

Apex.OS provides a software framework that enables reading, processing, and outputting data on embedded real-time systems used in safety-critical environments. That pertains to robotics and aerospace applications just as much as to automotive applications. We simply started with automotive applications because of the stronger market pull. 

Industrial robots today often run ROS for the perception system and a non-ROS embedded controller for highly accurate position control, because ROS cannot run the real-time controller with the necessary precision. Drones often run PX4 for the autopilot and ROS for the perception stack. Apex.OS combines the capabilities of ROS with the requirements of mobility systems, specifically regarding real-time performance, reliability, and the ability to run on embedded compute systems.

How will Apex contribute back to the open-source ROS 2 ecosystem that it's leveraging within Apex.OS?

We have contributed back to the ROS 2 ecosystem from day one. Any and all bugs that we find in ROS 2 get fixed in ROS 2 and thereby contributed back to the open-source codebase. We also provide a significant amount of funding to Open Robotics to do this. In addition, we are on the ROS 2 Technical Steering Committee to provide input and guidance to make ROS 2 more useful for automotive applications. Overall we have a great deal of interest in improving ROS 2 not only because it increases our customer base, but also because we strive to be a good open-source citizen.

The features we keep in house pertain to making ROS 2 realtime, deterministic, tested, and certified on embedded hardware. Our goals are therefore somewhat orthogonal to the goals of an open-source project aiming to address as many applications as possible. We, therefore, live in a healthy symbiosis with ROS 2. 

[ Apex.AI ]

It’s going to be a very, very long time before robots come anywhere close to matching the power-efficient mobility of animals, especially at small scales. Lots of folks are working on making tiny robots, but another option is to just hijack animals directly, by turning them into cyborgs. We’ve seen this sort of thing before with beetles, but there are many other animals out there that can be cyborgized. Researchers at Stanford and Caltech are giving sea jellies a try, and remarkably, it seems as though cyborg enhancements actually make the jellies more capable than they were before.

Usually, co-opting the mobility system of an animal with electronics doesn’t improve things for the animal, because we’re not nearly as good at controlling animals as they are at controlling themselves. But when you look at animals with very simple control systems, like sea jellies, it turns out that with some carefully targeted stimulation, they can move faster and more efficiently than they do naturally.

The researchers, Nicole W. Xu and John O. Dabiri, chose a friendly sort of sea jelly called Aurelia aurita, which is “an oblate species of jellyfish comprising a flexible mesogleal bell and monolayer of coronal and radial muscles that line the subumbrellar surface,” so there you go. To swim, jellies actuate the muscles in their bells, which squeeze water out and propel them forwards. These muscle contractions are controlled by a relatively simple stimulus of the jelly’s nervous system that can be replicated through external electrical impulses. 

To turn the sea jellies into cyborgs, the researchers developed an implant consisting of a battery, microelectronics, and bits of cork and stainless steel to make things neutrally buoyant, plus a wooden pin, which was used to gently impale each jelly through the bell to hold everything in place. While non-cyborg jellies tended to swim with a bell contraction frequency of 0.25 Hz, the implant allowed the researchers to crank the cyborg jellies up to a swimming frequency of 1 Hz.

Peak speed was achieved at 0.62 Hz, resulting in the jellies traveling at nearly half a body diameter per second (4-6 centimeters per second), which is 2.8x their typical speed. More importantly, calculating the cost of transport for the jellies showed that the 2.8x increase in speed came with only a 2x increase in metabolic cost, meaning that the cyborg sea jelly is both faster and more efficient.
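A quick back-of-the-envelope check on why that makes the cyborg jellies more efficient (my arithmetic, not a figure from the paper): cost of transport scales as metabolic power divided by speed, so a 2x increase in power against a 2.8x increase in speed leaves each meter traveled costing only about 70 percent as much energy.

# Relative cost of transport = relative metabolic power / relative speed.
speed_gain = 2.8   # reported increase in swimming speed
power_gain = 2.0   # reported increase in metabolic cost
relative_cot = power_gain / speed_gain
print(round(relative_cot, 2))  # ~0.71, i.e. roughly 30 percent less energy per distance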

This is a little bit weird from an evolutionary standpoint—if a sea jelly has the ability to move faster, and moving faster is more efficient for it, then why doesn’t it just move faster all the time? The researchers think it may have something to do with feeding:

A possible explanation for the existence of more proficient and efficient swimming at nonnatural bell contraction frequencies stems from the multipurpose function of vortices shed during swimming. Vortex formation serves not only for locomotion but also to enable filter feeding and reproduction. There may therefore be no evolutionary pressure for A. aurita to use its full propulsive capabilities in nature, and there is apparently no significant cost associated with maintaining those capabilities in a dormant state, although higher speeds might limit the animals’ ability to feed as effectively.

Image: Science Advances

Sea jelly with a swim controller implant consisting of a battery, microelectronics, electrodes, and bits of cork and stainless steel to make things neutrally buoyant. The implant includes a wooden pin that is gently inserted through the jelly’s bell to hold everything in place, with electrodes embedded into the muscle and mesogleal tissue near the bell margin.

The really nice thing about relying on cyborgs instead of robots is that many of the advantages of a living organism are preserved. A cyborg sea jelly is perfectly capable of refueling itself as well as making any necessary repairs to its structure and function. And with an energy efficiency that’s anywhere from 10 to 1000 times more efficient than existing swimming robots, adding a control system and a couple of sensors could potentially lead to a useful biohybrid monitoring system.

Lastly, in case you’re concerned about the welfare of the sea jellies, which I definitely was, the researchers did try to keep them mostly healthy and happy (or at least as happy as an invertebrate with no central nervous system can be), despite stabbing them through the bell with a wooden pin. They were all allowed to take naps (or the sea jelly equivalent) in between experiments, and the bell piercing would heal up after just a couple of days. All animals recovered post-experiments, the researchers say, although a few had “bell deformities” from being cooped up in a rectangular fish tank for too long rather than being returned to their jelliquarium. Also, jelliquariums are a thing and I want one.

You may have noticed that over the course of this article, I have been passive-aggressively using the term “sea jelly” rather than “jellyfish.” This is because jellyfish are not fish at all—you are more closely related to a fish than a jellyfish is, which is why “sea jelly” is the more accurate term that will make marine biologists happy. And just as jellyfish should properly be called sea jellies, starfish should be called sea stars, and cuttlefish should be called sea cuttles. The last one is totally legit, don’t even question it.

“Low-power microelectronics embedded in live jellyfish enhance propulsion,” by Nicole W. Xu and John O. Dabiri from Stanford University and Caltech, is published in Science Advances.

[ Science Advances ]
