Feed aggregator



Drones have the potential to be very useful in disaster scenarios by transporting food and water to people in need. But whenever you ask a drone to transport anything, anywhere, the bulk of what gets moved is the drone itself. Most delivery drones can carry only about 30 percent of their mass as payload, because most of their mass takes the form of components, like wings, that are critical to flight but essentially useless to the end user.

At the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in Kyoto last week, researchers from EPFL presented a paper describing a drone that can boost its payload of food from 30 percent to 50 percent of its mass. It does so through the ingenious use of wings made from rice cakes that contain the caloric equivalent of an average, if unbalanced, breakfast. For anyone interested in digesting the paper, it is titled “Towards edible drones for rescue missions: design and flight of nutritional wings,” by Bokeon Kwak, Jun Shintake, Lu Zhang, and Dario Floreano of EPFL.

This drone exists to work toward the effective and efficient delivery of food to someone who, for whatever reason, really really needs food and has no other way of getting it. The idea is that you could fly this drone directly to them and keep them going for an extra day or two. You obviously won’t get the drone back afterwards (because its wings will have been eaten off), but that’s a small price to pay for potentially keeping someone alive through the delivery of vital calories.

The researchers designed the wing of this partially edible drone out of compressed puffed rice (rice cakes or rice cookies, depending on who you ask) because of the foodstuff’s similarity to expanded polypropylene (EPP) foam. EPP foam is commonly used as a wing material in drones because it’s strong and lightweight, and puffed rice shares those qualities. Though not quite as strong as EPP, it’s not bad, and it’s also affordable, accessible, and easy to laser cut. Puffed rice has a respectable calorie density, too: at 3,870 kcal per kilogram, rice cakes aren’t as good as something like chocolate, but they’re about on par with pasta, just with a much lower density.

Out of the box, the rice cakes are round, so the first step in fabricating the wing is to laser cut them into hexagons so they’re easier to stick together. The glue is just gelatin, and once it all dries, the wing is packaged in plastic and tape to make sure that it doesn’t break down in wet or humid environments. It’s a process that’s fast, simple, and cheap.

The size of the wing is actually driven not by flight requirements, but by nutrition requirements. In this case, a wingspan of about 70 centimeters results in enough rice cake and gelatin glue to deliver 300 kcal, or the equivalent of one breakfast serving, with 80 grams remaining for a payload of vitamins or water or something like that. The formula the researchers came up with to calculate the design of this avian appetite quencher assumes that the rest of the drone is not edible, because it isn’t: the structure and tail surfaces are made of carbon fiber and foam.
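As a back-of-the-envelope illustration (a simplification, not the paper's actual sizing formula), the nutrition-driven sizing boils down to dividing the calorie target by the energy density of the wing material:

```python
# Rough nutrition-driven sizing of the edible wing.
# The calorie figures come from the article; the formula itself is an
# illustrative simplification, not the model from the paper.

KCAL_PER_KG_PUFFED_RICE = 3870.0  # calorie density of compressed puffed rice

def edible_wing_mass_kg(target_kcal, kcal_per_kg=KCAL_PER_KG_PUFFED_RICE):
    """Mass of edible wing material needed to deliver target_kcal."""
    return target_kcal / kcal_per_kg

# One breakfast serving (~300 kcal) works out to roughly 78 g of
# puffed rice, before counting the gelatin glue, which adds calories too.
print(f"{edible_wing_mass_kg(300) * 1000:.0f} g")  # → 78 g
```

From that mass and the areal density of the laser-cut rice cake plate, a required wing area (and hence wingspan) follows.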

While this is just a prototype, the half-edible drone does actually fly, achieving speeds of about 10 meters per second with the addition of a motor, some servos to actuate the tail surfaces for control, and a small battery. The next step is to figure out how to make as many of those non-edible pieces as possible out of edible materials instead, as well as how to carry a payload (like water) in an edible container.

For a bit more about this drone, we spoke with first author of the paper, Bokeon Kwak.

IEEE Spectrum: It sounds like your selection of edible wing material was primarily optimized for its mechanical properties and low weight. Are there other options that could work if the goal was to instead optimize for calories while still maintaining functionality?

Kwak: As you pointed out, achieving sufficient mechanical properties while maintaining low weight (with food materials) was the foremost design criterion for the edible wing. We can expand the design criteria to include more calories by using fat-based materials (e.g., edible wax); fat has more calories per gram than proteins and carbohydrates. On the other hand, packing in more calories also means more structural weight, which is a price we need to pay for higher calories. This aspect also requires further study to find a sweet spot!

What does the drone taste like?

The edible wing tastes like a crunchy rice crisp cookie with a little touch of raw gelatin (which works as an edible glue holding the rice cookies together in a flat plate shape). No artificial flavors have been added yet.

Would there be any significant advantages to making the wing into a more complex shape, for example with an airfoil cross section instead of a flat plate?

Making a well-streamlined airfoil (instead of a flat plate) is actually our next goal, to achieve more efficient aerodynamic properties such as lower drag and higher lift. These advantages would let an edible drone carry more payload (useful for carrying water) and achieve longer flight times and distances. Our team is testing 3D food printing and molding to create such an edible wing, including material characterization to make sure the edible wing has sufficient mechanical properties (i.e., high Young's modulus and low density).

What else will you be working on next?

Other structural components, such as wing control surfaces (e.g., ailerons and the rudder), will be made of edible material by 3D food printing or molding. We will also consider an edible, water-resistant coating for the surface of the edible wing, and degradation testing of the wing over time (and with water exposure).

This drone is just one application of a broader European research initiative called RoboFood, which seeks to develop edible robots that maximize both performance and nutritional value. Edible sensing, actuation, and computation are all parts of this project, and the researchers (led by Dario Floreano at EPFL) can now start to focus on some of those more challenging edible components.



IROS 2022 took place in Kyoto last week, bringing together thousands of roboticists from around the world to share all the latest awesome research they’ve been working on. We’ve got a bunch of stuff to bring you from the conference, but while we work on that (and recover from some monster jetlag), here are the presentation videos of all of the IROS 2022 award-winning papers. This is some of the best, most impactful robotics research presented this year. Congratulations to all of the winners!

IROS 2022 Best Paper Award

“SpeedFolding: Learning Efficient Bimanual Folding of Garments,” by Yahav Avigal, Lars Berscheid, Tamim Asfour, Torsten Kroeger, and Ken Goldberg from UC Berkeley and Karlsruhe Institute of Technology.

Read more: https://events.infovaya.com/presentation?id=84508

IROS 2022 Best Student Paper Award – Sponsored by ABB

“FAR Planner: Fast, Attemptable Route Planner Using Dynamic Visibility Update,” by Fan Yang, Chao Cao, Hongbiao Zhu, Jean Oh, and Ji Zhang from Carnegie Mellon University and Harbin Institute of Technology.

Read more: https://events.infovaya.com/presentation?id=84511

IROS Best Paper Award on Cognitive Robotics – Sponsored by KROS

“Gesture2Vec: Clustering Gestures Using Representation Learning Methods for Co-Speech Gesture Generation,” by Payam Jome Yazdian, Mo Chen, and Angelica Lim from Simon Fraser University.

Read more: https://events.infovaya.com/presentation?id=90186

IROS 2022 Best RoboCup Paper Award – Sponsored by RoboCup Federation

“RCareWorld: A Human-centric Simulation World for Caregiving Robots,” by Ruolin Ye, Wenqiang Xu, Haoyuan Fu, Rajat Kumar Jenamani, Vy Nguyen, Cewu Lu, Katherine Dimitropoulou, and Tapomayukh Bhattacharjee from Cornell University, Shanghai Jiaotong University, and Columbia University.

Read more: https://events.infovaya.com/presentation?id=84520

IROS Best Paper Award on Robot Mechanisms and Design – Sponsored by ROBOTIS

“Aerial Grasping and the Velocity Sufficiency Region,” by Tony G. Chen, Kenneth Hoffmann, JunEn Low, Keiko Nagami, David Lentink, and Mark Cutkosky from Stanford University and Wageningen University.

Read more: https://events.infovaya.com/presentation?id=85675

IROS Best Entertainment and Amusement Paper Award – Sponsored by JTCF

“Robot Learning to Paint From Demonstrations,” by Younghyo Park, Seunghun Jeon, and Taeyoon Lee from Seoul National University, KAIST, and Naver Labs.

Read more: https://events.infovaya.com/presentation?id=85681

IROS Best Paper Award on Safety, Security, and Rescue Robotics in memory of Motohiro Kisoi – Sponsored by IRS

“Power-Based Safety Layer for Aerial Vehicles in Physical Interaction Using Lyapunov Exponents,” by Eugenio Cuniato, Nicholas Lawrance, Marco Tognon, and Roland Siegwart from ETH Zurich and CSIRO.

Read more: https://events.infovaya.com/presentation?id=86266

IROS Best Paper Award on Agri-Robotics – Sponsored by YANMAR

“Explicitly Incorporating Spatial Information to Recurrent Networks for Agriculture,” by Claus Smitt, Michael Allan Halstead, Alireza Ahmadi, and Christopher Steven McCool from University of Bonn.

Read more: https://events.infovaya.com/presentation?id=86839

IROS Best Paper Award on Mobile Manipulation – Sponsored by OMRON Sinic X Corp.

“Robot Learning of Mobile Manipulation with Reachability Behavior Priors,” by Snehal Jauhri, Jan Peters, and Georgia Chalvatzaki from TU Darmstadt.

Read more: https://events.infovaya.com/presentation?id=86827

IROS Best Application Paper Award – Sponsored by ICROS

“Soft Tissue Characterisation Using a Novel Robotic Medical Percussion Device with Acoustic Analysis and Neural Network,” by Pilar Zhang Qiu, Yongxuan Tan, Oliver Thompson, Bennet Cobley, and Thrishantha Nanayakkara from Imperial College London.

Read more: https://events.infovaya.com/presentation?id=86287

IROS Best Paper Award for Industrial Robotics Research for Applications – Sponsored by Mujin Inc.

“Absolute Position Detection in 7-Phase Sensorless Electric Stepper Motor,” by Vincent Groenhuis, Gijs Rolff, Koen Bosman, Leon Abelmann, and Stefano Stramigioli from University of Twente, IMS BV, and Eye-on-Air.

Read more: https://events.infovaya.com/presentation?id=85705




Human peer tutoring is known to be effective for learning, and social robots are currently being explored for robot-assisted peer tutoring. In peer tutoring, not only the tutee but also the tutor benefit from the activity. Exploiting the learning-by-teaching mechanism, robots as tutees can be a promising approach for tutor learning. This study compares robots and humans by examining children’s learning-by-teaching with a social robot and younger children, respectively. The study comprised a small-scale field experiment in a Swedish primary school, following a within-subject design. Ten sixth-grade students (age 12–13) assigned as tutors conducted two 30 min peer tutoring sessions each, one with a robot tutee and one with a third-grade student (age 9–10) as the tutee. The tutoring task consisted of teaching the tutee to play a two-player educational game designed to promote conceptual understanding and mathematical thinking. The tutoring sessions were video recorded, and verbal actions were transcribed and extended with crucial game actions and user gestures, to explore differences in interaction patterns between the two conditions. An extension to the classical initiation–response–feedback framework for classroom interactions, the IRFCE tutoring framework, was modified and used as an analytic lens. Actors, tutoring actions, and teaching interactions were examined and coded as they unfolded in the respective child–robot and child–child interactions during the sessions. Significant differences between the robot tutee and child tutee conditions regarding action frequencies and characteristics were found, concerning tutee initiatives, tutee questions, tutor explanations, tutee involvement, and evaluation feedback. We have identified ample opportunities for the tutor to learn from teaching in both conditions, for different reasons. 
The child tutee condition provided opportunities to engage in explanations to the tutee, experience smooth collaboration, and gain motivation through social responsibility for the younger child. The robot tutee condition provided opportunities to answer challenging questions from the tutee, receive plenty of feedback, and communicate using mathematical language. Hence, both conditions provide good learning opportunities for a tutor, but in different ways.

Collecting temporally and spatially high-resolution environmental data can guide studies in the environmental sciences toward insights into ecological processes. The utilization of automated robotic systems to collect these types of data can maximize accuracy, resilience, and deployment rate. Furthermore, it reduces the risk to researchers deploying sensors in inaccessible environments and can significantly increase the cost-effectiveness of such studies. The introduction of transient robotic systems featuring embodied environmental sensors pushes towards building a digital ecology while introducing only minimal disturbance to the environment. Transient robots made from fully biodegradable, non-fossil-based materials do not turn into hazardous e-waste at the end of their lifetime and can thus enable broader adoption for environmental sensing in the real world. In this work, our approach to the design of transient robots includes the integration of humidity-responsive materials in a glider inspired by the Alsomitra macrocarpa seed. The design space of these gliders is explored and their behavior studied numerically, which allows us to make predictions about their flight characteristics. Results are validated against experiments, which show two different gliding behaviors that can help improve the spread of the sensors. By tailoring the cellulose-gelatin composition of the humidity actuator, self-folding systems for selective rainwater exposure can be designed. The pH-sensing layer, protected by the actuator, provides visual feedback on the pH of the rainwater. The presented methods can guide further concepts for transient aerial robotic systems for sustainable environmental monitoring.

A planetary exploration rover may be used for scientific missions or as a precursor for a future manned mission. The rover’s autonomous system is managed by a space-qualified, radiation-hardened onboard computer; hence, the processing performance of such a computer is strictly limited, owing to the limited power supply. Generally, a computationally efficient algorithm in the autonomous system is favorable. This study therefore presents a computationally efficient, sub-optimal trajectory planning framework for the rover. The framework exploits an incremental search algorithm, which generates more optimal solutions as the number of iterations increases. Such an incremental search is subject to a trade-off between trajectory optimality and computational burden. We therefore introduce the trajectory-quality growth rate (TQGR) to statistically analyze the relationship between trajectory optimality and computational cost. This analysis is conducted on several types of terrain, and a planning stop criterion is estimated. Furthermore, the relation between terrain features and the stop criterion is modeled offline with a machine learning technique. Then, using the criterion predicted by the model, the proposed framework appropriately interrupts the incremental search during online motion planning, resulting in a sub-optimal trajectory at a lower computational cost. Trajectory planning simulations on various real terrain data validate that the proposed framework can, on average, reduce computational cost by 47.6% while maintaining 63.8% of trajectory optimality. Furthermore, the simulation results show that the proposed framework still performs well even when the planning stop criterion is not adequately predicted.
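The interrupt-the-search idea can be sketched generically (a hypothetical toy, not the paper's planner: the improvement model and threshold here are invented, whereas in the paper the stop criterion is predicted from terrain features by a learned model):

```python
import random

def anytime_plan(improve_step, initial_cost, stop_rate, max_iters=1000):
    """Incremental (anytime) search that stops once the relative
    per-iteration improvement in trajectory cost (a stand-in for the
    trajectory-quality growth rate) falls below stop_rate."""
    cost = initial_cost
    for i in range(1, max_iters + 1):
        new_cost = improve_step(cost)
        growth = (cost - new_cost) / max(cost, 1e-9)  # relative improvement
        cost = new_cost
        if growth < stop_rate:
            return cost, i  # accept a sub-optimal trajectory, save compute
    return cost, max_iters

# Toy improvement model: each iteration shaves a random fraction off the cost.
random.seed(0)
cost, iters = anytime_plan(lambda c: c * (1 - random.uniform(0.0, 0.1)),
                           initial_cost=100.0, stop_rate=0.01)
```

Raising `stop_rate` trades trajectory quality for compute, which is the knob the learned terrain model effectively tunes online.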

The need for robotic systems to be verified grows as robots are increasingly used in complex applications with safety implications. Model-driven engineering and domain-specific languages (DSLs) have proven useful in the development of complex systems. RoboChart is a DSL for modelling robot software controllers using state machines and a simple component model. It is distinctive in that it has a formal semantics and support for automated verification. Our work enriches RoboChart with support for modelling architectures and architectural patterns used in the robotics domain. Support is in the shape of an additional DSL, RoboArch, whose primitive concepts encapsulate the notion of a layered architecture and architectural patterns for use in the design of the layers that are only informally described in the literature. A RoboArch model can be used to generate automatically a sketch of a RoboChart model, and the rules for automatic generation define a semantics for RoboArch. Additional patterns can be formalised by extending RoboArch. In this paper, we present RoboArch, and give a perspective of how it can be used in conjunction with CorteX, a software framework developed for the nuclear industry.

This paper makes a contribution to research on digital twins that are generated from robot sensor data. We present the results of an online user study in which 240 participants were tasked to identify real-world objects from robot point cloud data. In the study we manipulated the render style (point clouds vs voxels), render resolution (i.e., density of point clouds and granularity of voxel grids), colour (monochrome vs coloured points/voxels), and motion (no motion vs rotational motion) of the shown objects to measure the impact of these attributes on object recognition performance. A statistical analysis of the study results suggests that there is a three-way interaction between our independent variables. Further analysis suggests: 1) objects are easier to recognise when rendered as point clouds than when rendered as voxels, particularly lower resolution voxels; 2) the effect of colour and motion is affected by how objects are rendered, e.g., utility of colour decreases with resolution for point clouds; 3) an increased resolution of point clouds only leads to an increased object recognition if points are coloured and static; 4) high resolution voxels outperform medium and low resolution voxels in all conditions, but there is little difference between medium and low resolution voxels; 5) motion is unable to improve the performance of voxels at low and medium resolutions, but is able to improve performance for medium and low resolution point clouds. Our results have implications for the design of robot sensor suites and data gathering and transmission protocols when creating digital twins from robot gathered point cloud data.

Although beginning to emerge, multiarticulate upper limb prostheses for children remain sparse despite the continued advancement of mechatronic technologies that have benefited adults with upper limb amputations. Upper limb prosthesis research is primarily focused on adults, even though rates of pediatric prosthetic abandonment far surpass those seen in adults. The implicit goal of a prosthesis is to provide effective functionality while promoting healthy social interaction. Yet most current pediatric devices offer a single degree of freedom open/close grasping function, a stark departure from the multiple grasp configurations provided in advanced adult devices. Although comparable child-sized devices are on the clinical horizon, understanding how to effectively translate these technologies to the pediatric population is vital. This includes exploring grasping movements that may provide the most functional benefits and techniques to control the newly available dexterity. Currently, no dexterous pediatric research platforms exist that offer open access to hardware and programming to facilitate the investigation and provision of multi-grasp function. Our objective was to deliver a child-sized multi-grasp prosthesis that may serve as a robust research platform. In anticipation of an open-source release, we performed a comprehensive set of benchtop and functional tests with common household objects to quantify the performance of our device. This work discusses and evaluates our pediatric-sized multiarticulate prosthetic hand that provides 6 degrees of actuation, weighs 177 g and was designed specifically for ease of implementation in a research or clinical-research setting. Through the benchtop and validated functional tests, the pediatric hand produced grasping forces ranging from 0.424–7.216 N and was found to be comparable to the functional capabilities of similar adult devices. 
As mechatronic technologies advance and multiarticulate prostheses continue to evolve, translating many of these emerging technologies may help provide children with more useful and functional prosthesis options. Effective translation will inevitably require a solid scientific foundation to inform how best to prescribe advanced prosthetic devices and control systems for children. This work begins addressing these current gaps by providing a much-needed research platform with supporting data to facilitate its use in laboratory and clinical research settings.

Automated shuttles are already seeing deployment in many places across the world and have the potential to transform public mobility to be safer and more accessible. During the current transition phase from fully manual vehicles toward higher degrees of automation and resulting mixed traffic, there is a heightened need for additional communication or external indicators to comprehend automated vehicle actions for other road users. In this work, we present and discuss the results from seven studies (three preparatory and four main studies) conducted in three European countries aimed at investigating and providing a variety of such external communication solutions to facilitate the exchange of information between automated shuttles and other motorized and non-motorized road users.

Multi-agent task allocation methods seek to distribute a set of tasks fairly amongst a set of agents. In real-world settings, such as soft fruit farms, human labourers undertake harvesting tasks. The harvesting workforce is typically organised by farm manager(s) who assign workers to the fields that are ready to be harvested and team leaders who manage the workers in the fields. Creating these assignments is a dynamic and complex problem, as the skill of the workforce and the yield (quantity of ripe fruit picked) are variable and not entirely predictable. The work presented here posits that multi-agent task allocation methods can assist farm managers and team leaders to manage the harvesting workforce effectively and efficiently. There are three key challenges faced when adapting multi-agent approaches to this problem: (i) staff time (and thus cost) should be minimised; (ii) tasks must be distributed fairly to keep staff motivated; and (iii) the approach must be able to handle incremental (incomplete) data as the season progresses. An adapted variation of Round Robin (RR) is proposed for the problem of assigning workers to fields, and market-based task allocation mechanisms are applied to the challenge of assigning tasks to workers within the fields. To evaluate the approach introduced here, experiments are performed based on data that was supplied by a large commercial soft fruit farm for the past two harvesting seasons. The results demonstrate that our approach produces appropriate worker-to-field allocations. Moreover, simulated experiments demonstrate that there is a “sweet spot” with respect to the ratio between two types of in-field workers.
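A minimal sketch of the round-robin portion of such a scheme (the names and data below are hypothetical, and the real adapted approach would also weight by predicted yield and worker skill, which this omits):

```python
from collections import defaultdict
from itertools import cycle

def round_robin_assign(workers, fields):
    """Cycle through fields, handing out one worker at a time, so that
    team sizes across fields differ by at most one worker."""
    assignment = defaultdict(list)
    field_cycle = cycle(fields)
    for worker in workers:
        assignment[next(field_cycle)].append(worker)
    return dict(assignment)

# Hypothetical example: five pickers split across two fields.
teams = round_robin_assign(["w1", "w2", "w3", "w4", "w5"],
                           ["field_A", "field_B"])
# teams == {"field_A": ["w1", "w3", "w5"], "field_B": ["w2", "w4"]}
```

The even split is what keeps the assignment "fair" in the simplest sense; fairness under variable yield is where the market-based in-field mechanisms take over.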

The flexibility and efficiency of parts production can be significantly increased through the technological cooperation of industrial robots and machine tools. The paper presents an approach in which a robot, in addition to classic handling tasks, enhances machine tools with additional manufacturing technologies and thus beneficially supports workpiece machining. This can take place in various configurations, starting with pre- and final machining by the robot outside the machine, through sequential cooperative machining of the workpiece clamped in the machine, to parallel, synchronized machining of a workpiece in the machine. The approach results in a novel type of collaborative manufacturing equipment for matrix production that will improve versatility, efficiency, and profitability in production.

Previous research in human-robot interaction has explored using robots to increase objective and hedonic aspects of well-being and quality of life, but there is no literature on how robots might be used to support eudaimonic aspects of well-being (such as meaning in life). A sense of meaning has been shown to positively affect health and longevity. We frame our study around the Japanese concept of ikigai, which is widely used with Japanese older adults to enhance their everyday lives and is closely related to the concept of eudaimonic well-being (EWB) known in Western countries. Using a mixed-methods, exploratory approach, including interviews with 17 older adults and the collection of 100 survey responses, we explored how older adults in the US experience a sense of meaning, and if and how a social robot could be used to help foster this sense. We find that meaning for older adults is often obtained by helping others, through family connections, and/or through activities of daily life, and that sources of meaning often differ based on the older adults’ living situation. Assessing how meaning compares to happiness and social connection, we highlight general similarities and differences, and also find that living situation influences older adults’ sources of happiness, desire for social connection, and barriers to well-being; in addition, companionship and happiness have a weaker correlation with meaning for those who live alone than for those who live with others. Additionally, we evaluated initial perceptions of a social robot (QT) meant to enhance ikigai and overall well-being. We found mostly positive perceptions, though those who live alone reported being less willing to adopt a social robot into their homes.
Using both the data collected on older adults’ meaning and the potential use of QT to support meaning, we make several design recommendations for using robots to enhance ikigai, such as prompting daily reflection, enhancing family bonds, and suggesting new experiences and volunteer opportunities.



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

IROS 2022: 23–27 October 2022, KYOTO, JAPAN
ANA Avatar XPRIZE Finals: 4–5 November 2022, LOS ANGELES
CoRL 2022: 14–18 December 2022, AUCKLAND, NEW ZEALAND

Enjoy today’s videos!

Imagine being able to control a swarm of drones with just your hands or gestures. In this video, we explore a future concept-of-operations for swarm management and how large groups of robots and drones will be able to interact and work together.

[ Dronisos ]

There’s a new Mini Pupper on Kickstarter, now with ROS 2!

[ Kickstarter ]

Researchers created a method for magnetically programming materials to make cubes that are very picky about who they connect with, enabling more scalable self-assembly.

Paper at IROS next week!

[ MIT CSAIL ]

Thanks, Rachel!

This summer, we held a contest seeking ideas for robots inspired by nature that could help the world. Then we made the winning idea into a real working prototype! This year’s winner was “Gillbert” by Eleanor Mackintosh, a robotic fish that filters microplastics using its gills.

[ Natural Robotics Contest ]

Thanks, Rob!

I’ve never seen a real centaur climb up onto a block while carrying a payload, but I bet it would look almost exactly like Centauro doing it.

[ Paper ]

Thanks, Ioannis!

Enjoy our favorite obstacle avoidance highlights from the Skydio community! They make showcasing the intellect of our software sublimely easy. The power of autonomous cinematography is displayed best by our incredible Skydians!

That last clip is especially impressive, since if you look closely, you can see the drone avoiding a wire while flying directly toward the setting sun.

[ Skydio ]

Somehow I missed this adorable little robot of questionable usefulness from Sony.

Meet poiq, your future buddy robot. Its AI gets smarter and more individualized through questions and conversations with users. Sony is reimagining communication and connection, and developing one-of-a-kind friendships between humans and robots in the process.

[ Sony ]

Spot’s got permission to dance! Check out this dance created for the “BTS Yet To Come in BUSAN” concert.

[ Boston Dynamics ]

Awawa, awawa...

[ ICD Lab ]

Ascento, on patrol.

[ Ascento Robotics ]

Here’s what happens if you grab a Wing delivery drone’s cable and start running.

[ Wing ]

Detecting an overheating motor can be the difference between a $1,000 repair and a $50,000 replacement. As a result, routine thermal inspections are a major part of predictive maintenance operations, but collecting this valuable information frequently is still a challenge in many facilities. Agile mobile robots like Spot are transforming condition monitoring with dynamic sensing, so industrial teams can make the most of their predictive maintenance programs.

[ Boston Dynamics ]

Robotnik specializes in the development of industrial robotic applications based on mobile robots and mobile manipulators. Here are some AMRs developed and manufactured by us.

[ Robotnik ]

How many robot dogs does it take to explore a football field? Fewer than it would if they weren’t working together, that’s for sure.

[ Deep Robotics ]

During summer 2022, our group demoed ANYmal and Spot carrying out construction progress monitoring at Costain’s Gatwick Airport Train Station site. This was the final demo of the MEMMO Horizon Europe Project.

[ Oxford ]

Lex Fridman interviews Kate Darling.

[ Lex Fridman ]

In this week’s CMU RI Seminar, Nidhi Kalra from The RAND Corporation answers the question, “What (else) can you do with a robotics degree?”

[ CMU RI ]



We present QUaRTM, a novel quadcopter design capable of tilting its propellers into the forward flight direction, which reduces the drag area and therefore allows for faster, more agile, and more efficient flight. The vehicle can morph between two configurations in mid-air: untilted and tilted. In the untilted configuration, the vehicle has a higher pitch-torque capacity and a smaller vertical dimension; in the tilted configuration, it has a lower drag area, leading to a higher top speed, higher agility at high speed, and better flight efficiency. The morphing is accomplished without any actuators beyond the four motors of the quadcopter: the rigid connections between the quadcopter frame and the quadcopter arms are replaced with sprung hinges, which allow the propellers to tilt when high thrusts are produced and to return to the untilted configuration when the thrusts are brought low. The effectiveness of such a vehicle is demonstrated through experiments on a prototype with a shape similar to a regular quadcopter. Through the use of tilting, the vehicle is shown to have a 12.5 percent higher maximum speed, better high-speed agility (the maximum crash-free cruise speed increased by 7.5 percent), and better flight efficiency, with power consumption dropping by more than 20 percent in the 15–20 m/s speed range.
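The efficiency gain follows from the cubic dependence of parasitic drag power on airspeed, P = ½ρ(C_d·A)v³: shrinking the effective drag area cuts the power bill at a given cruise speed in direct proportion. A minimal sketch of that relationship, using illustrative drag-area values that are assumptions for this example, not measurements from the paper:

```python
# Parasitic drag power for a multirotor in fast forward flight:
#   P = 0.5 * rho * (Cd*A) * v**3
# The Cd*A values below are illustrative assumptions, not QUaRTM data.

RHO = 1.225  # air density at sea level, kg/m^3

def drag_power(cda: float, v: float) -> float:
    """Power (W) needed to overcome parasitic drag at airspeed v (m/s)."""
    return 0.5 * RHO * cda * v**3

cda_untilted = 0.020  # assumed effective drag area, m^2
cda_tilted = 0.015    # assumed 25% smaller drag area when tilted

for v in (15.0, 20.0):
    p0 = drag_power(cda_untilted, v)
    p1 = drag_power(cda_tilted, v)
    print(f"v={v:4.1f} m/s: untilted {p0:6.1f} W, tilted {p1:6.1f} W, "
          f"drag-power saving {100 * (1 - p1 / p0):.0f}%")
```

At any fixed speed the parasitic-drag saving equals the fractional area reduction; the paper's reported total power drop of more than 20 percent also folds in other effects, such as the changed thrust orientation.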



Elon Musk, step aside. You may be the richest rich man in the space business, but you’re not first. Musk’s SpaceX corporation is a powerful force, with its weekly launches and visions of colonizing Mars. But if you want a broader view of how wealthy entrepreneurs have shaped space exploration, you might want to look at George Ellery Hale, James Lick, William McDonald or—remember this name—John D. Hooker.

All this comes up now because SpaceX, joining forces with the billionaire Jared Isaacman, has made what sounds at first like a novel proposal to NASA: It would like to see if one of the company’s Dragon spacecraft can be sent to service the fabled, invaluable (and aging) Hubble Space Telescope, last repaired in 2009.

Private companies going to the rescue of one of NASA’s crown jewels? NASA’s mantra in recent years has been to let private enterprise handle the day-to-day of space operations—communications satellites, getting astronauts to the space station, and so forth—while pure science, the stuff that makes history but not necessarily money, remains the province of government. Might that model change?

“We’re working on crazy ideas all the time,” said Thomas Zurbuchen, NASA’s space science chief. “Frankly, that’s what we’re supposed to do.”

It’s only a six-month feasibility study for now; no money will change hands between business and NASA. But Isaacman, who made his fortune in payment-management software before turning to space, suggested that if a Hubble mission happens, it may lead to other things. “Alongside NASA, exploration is one of many objectives for the commercial space industry,” he said on a media teleconference. “And probably one of the greatest exploration assets of all time is the Hubble Space Telescope.”

So it’s possible that at some point in the future, there may be a SpaceX Dragon, perhaps with Isaacman as a crew member, setting out to grapple the Hubble, boost it into a higher orbit, maybe even replace some worn-out components to lengthen its life.

Aerospace companies say privately mounted repair sounds like a good idea. So good that they’ve proposed it already.

The Chandra X-ray telescope, as photographed by space-shuttle astronauts after they deployed it in July 1999. It is attached to a booster that moved it into an orbit 10,000 by 100,000 kilometers from Earth. NASA

Northrop Grumman, one of the United States’ largest aerospace contractors, has quietly suggested to NASA that it might service one of the Hubble’s sister telescopes, the Chandra X-ray Observatory. Chandra was launched into Earth orbit by the space shuttle Columbia in 1999 (Hubble was launched from the shuttle Discovery in 1990), and the two often complement each other, observing the same celestial phenomena at different wavelengths.

As in the case of the SpaceX/Hubble proposal, Northrop Grumman’s Chandra study is at an early stage. But there are a few major differences. For one, Chandra was assembled by TRW, a company that has since been bought by Northrop Grumman. And another company subsidiary, SpaceLogistics, has been sending what it calls Mission Extension Vehicles (MEVs) to service aging Intelsat communications satellites since 2020. Two of these robotic craft have launched so far. The MEVs act like space tugs, docking with their target satellites to provide them with attitude control and propulsion if their own systems are failing or running out of fuel. SpaceLogistics says it is developing a next-generation rescue craft, which it calls a Mission Robotic Vehicle, equipped with an articulated arm to add, relocate, or possibly repair components on orbit.

“We want to see if we can apply this to space-science missions,” says Jon Arenberg, Northrop Grumman’s chief mission architect for science and robotic exploration, who worked on Chandra and, later, the James Webb Space Telescope. He says a major issue for servicing is the exacting specifications needed for NASA’s major observatories; Chandra, for example, records the extremely short wavelengths of X-ray radiation (0.01–10 nanometers).
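Those wavelengths can be put in the energy units X-ray astronomers usually quote via the photon-energy relation E = hc/λ, where hc ≈ 1.2398 keV·nm. A quick sketch of the conversion (the constant is standard; the band endpoints are simply the ones quoted above):

```python
# Convert an X-ray wavelength to photon energy using E = h*c / lambda.
# h*c ≈ 1239.8 eV·nm, so E[keV] = 1.2398 / lambda[nm].

HC_KEV_NM = 1.2398  # Planck constant times speed of light, in keV·nm

def wavelength_to_kev(lam_nm: float) -> float:
    """Photon energy in keV for a wavelength given in nanometers."""
    return HC_KEV_NM / lam_nm

# The 0.01–10 nm band quoted above corresponds to roughly 0.12–124 keV.
for lam in (10.0, 0.01):
    print(f"{lam:5.2f} nm -> {wavelength_to_kev(lam):7.2f} keV")
```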

“We need to preserve the scientific integrity of the spacecraft,” he says. “That’s an absolute.”

But so far, the company says, a mission seems possible. NASA managers have listened receptively. And Northrop Grumman says a servicing mission could be flown for a fraction of the cost of a new telescope.

New telescopes need not be government projects. In fact, NASA’s chief economist, Alexander MacDonald, argues that almost all of America’s greatest observatories were privately funded until Cold War politics made government the major player in space exploration. That’s why this story began with names from the 19th and 20th centuries—Hale, Lick, and McDonald—to which we should add Charles Yerkes and, more recently, William Keck. These were arguably the Elon Musks of their times—entrepreneurs who made millions in oil, iron, or real estate before funding the United States’ largest telescopes. (Hale’s father manufactured elevators—highly profitable in the rebuilding after the Great Chicago Fire of 1871.) The most ambitious observatories, MacDonald calculated for his book The Long Space Age, were about as expensive back then as some of NASA’s modern planetary probes. None of them had very much to do with government.

To be sure, government will remain a major player in space for a long time. “NASA pays the cost, predominantly, of the development of new commercial crew vehicles, SpaceX’s Dragon being one,” MacDonald says. “And now that those capabilities exist, private individuals can also pay to utilize those capabilities.” Isaacman doesn’t have to build a spacecraft; he can hire one that SpaceX originally built for NASA.

“I think that creates a much more diverse and potentially interesting space-exploration future than we have been considering for some time,” MacDonald says.

So put these pieces together: Private enterprise has been a driver of space science since the 1800s. Private companies are already conducting on-orbit satellite rescues. NASA hasn’t said no to the idea of private missions to service its orbiting observatories.

And why does John D. Hooker’s name matter? In 1906, he agreed to put up US $45,000 (about $1.4 million today) to make the mirror for a 100-inch reflecting telescope at Mount Wilson, Calif. One astronomer made the Hooker Telescope famous by using it to determine that the universe, full of galaxies, was expanding.

The astronomer’s name was Edwin Hubble. We’ve come full circle.



