Feed aggregator



Robots are well known for having consistency and precision that humans tend to lack. Robots are also well known for not being especially creative—depending I suppose on your definition of “creative.” Either way, roboticists have seized an opportunity to match the strengths of humans and robots while plastering over their respective weaknesses.

At CHI 2022, researchers from ETH Zurich presented an interactive robotic plastering system that lets artistic humans use augmented reality to create three-dimensional designs meant to be sprayed in plaster on bare walls by robotic arms.

Robotic fabrication is not a new idea. And there are lots of examples of robots building intricate structures, leveraging their penchant for precision and other robot qualities to place components in careful, detailed patterns that yield unique architectures. This algorithmic approach is certainly artistic on its own, but not quite as much as when humans are in the loop. Toss a human into the mix, and you get stuff like this:

I’m honestly not sure whether a human would be able to create something with that level of complexity, but I’m fairly sure that if a human could do that, they wouldn’t be able to do it as quickly or repeatably as the robot can. The beauty of this innovation (besides what ends up on the wall) is the way the software helps human designers be even more creative (or to formalize and express their creativity in novel ways), while offloading all of the physically difficult tasks to the machine. Seeing this—perhaps naively—I feel like I could jump right in there and design my own 3D wall art (which I would totally do, given the chance).

A variety of filter systems can translate human input to machine output in different styles.

And maybe that’s the broader idea here: that robots are able to slightly democratize some tasks that otherwise would require an impractical amount of experience and skill. In this example, it’s not that the robot would replace a human expert; the machine would let the human create plaster designs in a completely different way with completely different results from what human hands could generate unassisted. The robotic system is offering a new kind of interface that enables a new kind of art that wouldn’t be possible otherwise and that doesn’t require a specific kind of expertise. It’s not better or worse; it’s just a different approach to design and construction.

Future instantiations of this system will hopefully be easier to use; as a research project, it requires a lot of calibration and the hardware can be a bit of a hassle to manage. The researchers say they hope to improve the state of play significantly by making everything more self-contained and easier to access remotely. That will eliminate the need for designers to be on-site. While a system like this will likely never be cheap, I’m imagining a point at which you might be able to rent one for a couple of days for your own home, so you can add texture (and perhaps eventually color?) that will give you one-of-a-kind walls and rooms.

Interactive Robotic Plastering: Augmented Interactive Design and Fabrication for On-site Robotic Plastering, by Daniela Mitterberger, Selen Ercan Jenny, Lauren Vasey, Ena Lloret-Fritschi, Petrus Aejmelaeus-Lindström, Fabio Gramazio, and Matthias Kohler from ETH Zurich, was presented at CHI 2022.



Engineering robot personalities is a multifaceted challenge. Every robot that interacts with humans is an individual physical presence that may require its own personality. Robot-personality engineers thus face the reverse of the personality psychologist’s problem: rather than formulating comprehensive yet parsimonious descriptions of individual personalities that already exist, they need to turn batches of identical robots into individual personalities. Research on robot personality has so far been fruitful in demonstrating its positive effects, but it has yielded little insight into how robot personalities can be engineered in significant quantities. To engineer robot personalities for mass-produced robots, we need a generative personality model with a structure that encodes a robot’s individual characteristics as personality traits and generates behaviour with inter- and intra-individual differences reflecting those characteristics. We propose a generative personality model shaped by goals, as part of a personality AI for robots towards which we have been working, and we conducted tests to investigate how many individual personalities the model can practically support when it is used to express personality via non-verbal behaviour on the heads of humanoid robots.
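
For a concrete sense of what "generative" means here, the sketch below maps a small trait vector to non-verbal head-behaviour parameters: per-robot trait values produce inter-individual differences, and per-call noise produces intra-individual variation. The two traits, the parameter names, and the linear mappings are illustrative assumptions, not the paper's actual goal-shaped model.

    import random
    from dataclasses import dataclass


    @dataclass
    class Personality:
        """Hypothetical two-trait encoding; the paper's goal-shaped model is richer."""
        extraversion: float  # 0..1
        dominance: float     # 0..1


    def head_behavior(p: Personality, rng: random.Random) -> dict:
        """Map traits to non-verbal head-behaviour parameters.

        Trait values create inter-individual differences; per-call noise
        creates intra-individual variation around each robot's own style.
        """
        def jitter() -> float:
            return rng.gauss(0.0, 0.05)

        return {
            "gaze_shifts_per_min": 2.0 + 10.0 * p.extraversion * (1 + jitter()),
            "nod_amplitude_deg": 5.0 + 15.0 * p.dominance * (1 + jitter()),
            "head_tilt_bias_deg": 10.0 * (p.extraversion - 0.5) + jitter(),
        }


    if __name__ == "__main__":
        rng = random.Random(42)
        # Give a batch of otherwise-identical robots distinct personalities.
        batch = [Personality(rng.random(), rng.random()) for _ in range(3)]
        for i, p in enumerate(batch):
            print(f"robot {i}:", head_behavior(p, rng))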

Artificial audition aims to provide hearing capabilities to machines, computers, and robots. Existing frameworks in robot audition offer capable sound source localization, tracking, and separation performance, but they involve a significant amount of computation, which limits their use on robots with embedded computing capabilities. This paper presents ODAS, the Open embeddeD Audition System framework, which includes strategies to reduce the computational load and perform robot audition tasks on low-cost embedded computing systems. It presents key features of ODAS, along with cases illustrating its use in different robots and artificial audition applications.
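
ODAS itself is written in C and its internals are far more optimized, but the following NumPy sketch of GCC-PHAT, a standard time-difference-of-arrival estimator for a microphone pair, illustrates the kind of computation that sound source localization frameworks like ODAS perform. The test signal is synthetic.

    import numpy as np


    def gcc_phat(sig: np.ndarray, ref: np.ndarray, fs: int, max_tau: float) -> float:
        """Estimate the time difference of arrival between two microphone
        signals with the classic GCC-PHAT cross-correlation."""
        n = sig.size + ref.size
        spec = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
        spec /= np.abs(spec) + 1e-12          # PHAT weighting: keep phase only
        cc = np.fft.irfft(spec, n=n)
        max_shift = int(fs * max_tau)
        cc = np.concatenate((cc[-max_shift:], cc[: max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs


    # Synthetic test: broadband noise arriving 1 ms later at the second mic.
    fs = 16000
    rng = np.random.default_rng(0)
    src = rng.standard_normal(fs)
    delay = int(0.001 * fs)
    mic2 = np.concatenate((np.zeros(delay), src))[:fs]
    print(gcc_phat(mic2, src, fs, max_tau=0.005))   # prints ~0.001 s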



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, PHILADELPHIA
IEEE ARSO 2022: 28–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 27 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL

Enjoy today’s videos!

What a strange position for Boston Dynamics to be in, having to contend with the fact that its robots are at this point likely best known for dancing rather than for being useful in a more productivity-minded way:

Boston Dynamics is also announcing some upgrades for Spot:

[ Boston Dynamics ]

MIT CSAIL has developed a new way to rapidly design and fabricate soft pneumatic actuators with integrated sensing. Such actuators can be used as the backbone in a variety of applications such as assistive wearables, robotics, and rehabilitative technologies.

[ MIT ]

The Sechseläuten (“the six o’clock ringing of the bells”) is a traditional spring holiday in the Swiss city of Zurich, and this year, it had a slightly less traditional guest: ANYmal!

[ Swiss-Mile ]

Thanks, Marko!

Working in collaboration with domestic appliances manufacturer Beko, researchers from the University of Cambridge trained their robot chef to assess the saltiness of a dish at different stages of the chewing process, imitating a similar process in humans. Their results could be useful in the development of automated or semi-automated food preparation by helping robots to learn what tastes good and what doesn’t, making them better cooks.

[ Cambridge ]

More impressive work from the UZH Robotics and Perception Group, teaching racing quadrotors to adapt on the fly to a changing course:

[ RPG ]

In the SANDRo Project, funded by DIH-HERO, PAL Robotics and Heemskerk Innovation Technology are developing the TIAGo robot to provide assistive services to people with difficulties in the activities of daily living.

[ PAL Robotics ]

For drones to autonomously perform necessary but quotidian tasks, such as delivering packages or airlifting injured drivers from a traffic accident, drones must be able to adapt to wind conditions in real time—rolling with the punches, meteorologically speaking. To face this challenge, a team of engineers from Caltech has developed Neural-Fly, a deep-learning method that can help drones cope with new and unknown wind conditions in real time just by updating a few key parameters.

[ Caltech ]

On May 17th, the Furhat Conference on Social Robotics returns with a new lineup of experts who will share their latest cutting-edge research and innovation projects using social robots and conversational AI. Since Furhat Robotics’ recent acquisition of Misty Robotics, a brand-new face will make an appearance—the Misty robot! Registration for the conference is free and now open.

[ Furhat Conference ]

Thanks, Chris!

This is quite a contest: Draw your best idea for a robot inspired by nature, and if you win, a bunch of robotics experts will actually build it!

[ Natural Robotics Contest ]

Thanks, Robert!

Franka Production 3 is the force-sensitive robot platform made in Germany, a system that ignites productivity for everyone who needs industrial robotics automation.

[ Franka ]

Thailand is equipping vocational students with robotic skills to meet an anticipated demand for 200,000 robotics-trained workers by 2024. As more and more factories move to Thailand, education plays an important role in preparing students with Industry 4.0 knowledge.

[ Kuka ]

Dusty Robotics develops robot-powered tools for the modern construction workforce, using cutting-edge robotics technology that is built in-house from the ground up. Our engineers design the mechanical, electrical, firmware, robotics, and software components that power ultra-precise mobile printers. Hear from Dusty engineers about what it’s like to work at Dusty and the impact their work has—every day.

[ Dusty ]

One in three older adults falls every year, leading to a serious healthcare problem in the United States. A team of Stanford scholars is developing wearable robotics to help people restore their balance and prevent these falls. Karen Liu, associate professor of computer science, and Steve Collins, associate professor of mechanical engineering, explain how an intelligent exoskeleton could enhance people’s mobility.

[ Stanford HAI ]

The latest episode of the Robot Brains Podcast features Skydio CEO Adam Bry.

[ Robot Brains ]

This week’s CMU RI Seminar is by Ross L. Hatton from Oregon State, on “Snakes & Spiders, Robots & Geometry.”

[ CMU RI ]



Global navigation satellite system (GNSS) positioning has recently garnered attention for autonomous driving, machine control, and construction sites. With the development of low-cost multi-GNSS receivers and the advent of new types of GNSS, such as Japan’s Quasi-Zenith Satellite System, the potential of GNSS positioning has increased. New types of GNSS directly increase the number of line-of-sight (LOS) signals in dense urban areas and improve positioning accuracy. However, GNSS receivers can observe both LOS and non-line-of-sight (NLOS) signals in dense urban areas, and more NLOS signals are observed under static conditions than under dynamic conditions. The classification of LOS and NLOS signals is important, and various methods have been proposed, such as C/N0-based classification, three-dimensional maps, fish-eye views, and GNSS/inertial navigation system integration. Multipath detection based on machine learning has also been reported in recent years. In this study, we propose a method for detecting NLOS signals using a support vector machine (SVM) classifier modeled with unique features calculated from receiver independent exchange format (RINEX)-based information, combined with a GNSS pseudorange residual check. We found that using both the SVM classifier and the pseudorange residual check effectively reduced the error due to NLOS signals. Several static tests were conducted near high-rise buildings in downtown Tokyo that are likely to receive NLOS signals. For all static tests, the percentage of horizontal positioning errors within 10 m was improved by >80% by detecting and eliminating satellites receiving NLOS signals.
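
As a rough illustration of the classification step (not the paper's exact feature set or data), a scikit-learn sketch might look like the following, with C/N0, elevation angle, and pseudorange residual as placeholder per-satellite features trained on synthetic labels:

    import numpy as np
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # Placeholder per-satellite features: C/N0 [dB-Hz], elevation [deg],
    # pseudorange residual [m]. NLOS signals tend to be weaker, lower in
    # elevation, and to carry larger residuals.
    rng = np.random.default_rng(0)
    n = 1000
    los = np.column_stack([rng.normal(45, 3, n), rng.uniform(30, 90, n), rng.normal(0, 2, n)])
    nlos = np.column_stack([rng.normal(35, 4, n), rng.uniform(5, 45, n), rng.normal(15, 10, n)])
    X = np.vstack([los, nlos])
    y = np.array([0] * n + [1] * n)              # 0 = LOS, 1 = NLOS

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_tr, y_tr)
    print(classification_report(y_te, clf.predict(X_te), target_names=["LOS", "NLOS"]))
    # Satellites flagged NLOS would then be excluded (or cross-checked against
    # the pseudorange residual test) before computing the position fix.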

Within the last decade, soft robotics has attracted increasing attention from both academia and industry. Although multiple literature reviews of the whole soft robotics field have been conducted, there still appears to be a lack of systematic investigation of the intellectual structure and evolution of the field, given its growing number of publications. This paper conducts a scientometric review of the progressively synthesized network derived from 10,504 bibliographic records, retrieved using a topic search on soft robotics from 2010 to 2021 in the Web of Science (WoS) core database. The results are presented both as a general analysis of the included papers (e.g., relevant journals, citations, h-index, years, institutions, countries, disciplines) and as a specific analysis of the main disciplines and topics and, more importantly, emerging trends. CiteSpace, a data visualization tool that can construct co-citation network maps and identify citation bursts, is used to explore the intellectual structure and emerging trends of the soft robotics field. In addition, this paper offers a demonstration of an effective analytical method for evaluating large volumes of publication citation and co-citation data. The findings of this review can be used as a reference for future research in soft robotics and related topics.



This is a sponsored article brought to you by SICK Inc.

From advanced manufacturing to automated vehicles, engineers are using LiDAR to change the world as we know it. For the second year, students from across the country submitted projects to SICK's annual TiM$10K Challenge.


The first place team during the 2020 TiM$10K Challenge hails from Worcester Polytechnic Institute (WPI) in Worcester, Mass. The team comprised undergraduate seniors Daniel Pelaez and Noah Budris and undergraduate junior Noah Parker.

With the help of their academic advisor, Dr. Alexander Wyglinski, Professor of Electrical Engineering and Robotics Engineering at WPI, the team took first place in the 2020 TiM$10K Challenge with their project titled ROADGNAR, a mobile and autonomous pavement quality data collection system.

So what is the TiM$10K Challenge?

In this challenge, SICK reached out to universities across the nation, looking to support innovation and student achievement in automation and technology. Participating teams were supplied with a SICK 270° LiDAR sensor (a TiM) and accessories. They were challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK scanner in any industry.

Around the United States, many of the nation's roadways are in poor condition, most often from potholes and cracks in the pavement, which can make driving difficult. Many local governments agree that infrastructure is in need of repair, but with a lack of high-quality data, inconsistencies in damage reporting, and an overall lack of adequate prioritization, this is a difficult problem to solve.



Pelaez, Parker, and Budris first came up with the idea for ROADGNAR before they had even learned of the TiM$10K Challenge. They noticed that the roads in their New England area were in poor condition and wanted to see if there was a way to improve how road maintenance is performed.

In their research, they learned that many local governments use outdated and manual processes. Many send out workers to check for poor road conditions, who then log the information in notebooks.

The team began working on a solution to help solve this problem. It was at a career fair that Pelaez met a SICK representative, who encouraged him to apply to the TiM$10K Challenge.

Win $10K and a Trip to Germany!

SICK is excited to announce the 2022-2023 edition of the SICK TiM$10K Challenge. Twenty teams will be selected to participate in the challenge, and the chosen teams will be supplied with a 270º SICK LiDAR sensor (TiM) and accessories. The teams will be challenged to solve a problem, create a solution, and bring a new application that utilizes the SICK LiDAR in any industry. This can be part of the curriculum of a senior design or capstone project for students.

Awards:

The three winning teams will receive cash awards:

• 1st Place - $10K
• 2nd Place - $5K
• 3rd Place - $3K

In addition to bragging rights and the cash prize, the 1st place winning team, along with the advising professor, will be offered an all-expenses-paid trip to SICK Germany to visit the SICK headquarters and manufacturing facility!

Registration is now open for the academic year 2022-2023!


Using SICK's LiDAR technology, ROADGNAR takes a 3D scan of the road, and the data is then used to determine the exact level of repair needed.

ROADGNAR collects detailed data on the surface of any roadway, while still allowing for easy integration onto any vehicle. With this automated system, road maintenance can become a faster, more reliable, and more efficient process for towns and cities around the country.

ROADGNAR solves this problem through two avenues: hardware and software. The team designed two mounting brackets to connect the system to a vehicle. The first, located in the back of the vehicle, supports a LiDAR scanner. The second is fixed in line with the vehicle's axle and supports a wheel encoder, which is wired to the fuse box.

"It definitely took us a while to figure out a way to power ROADGNAR so we wouldn't have to worry about it shutting off while the car was in motion," said Parker.

Also wired to the fuse box is a GPS module within the vehicle itself. Data transfer wires are attached to these three systems and connected to a central processing unit within the vehicle.

Using LiDAR to collect road data

When the car is started, all connected devices turn on. The LiDAR scanner collects road surface data, the wheel encoder tracks an accurate measurement of the distance traveled by the vehicle, and the GPS generates geo-tags on a constant basis. All of this data is logged to an onboard database and presented to the user on a monitor, then archived to a hard drive.
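
The ROADGNAR software is not public, so purely as a sketch of the pipeline described above, a logging loop might look like this, with the driver classes standing in as hypothetical placeholders for the real LiDAR, encoder, and GPS interfaces:

    import json
    import time

    # Hypothetical driver classes standing in for the real hardware; this only
    # sketches the described pipeline: one geo-tagged, distance-stamped LiDAR
    # profile per record.

    class Lidar:
        def read_profile(self) -> list[float]:
            return [0.0] * 270                   # one cross-road height profile

    class WheelEncoder:
        def distance_m(self) -> float:
            return 0.0                           # placeholder odometry

    class Gps:
        def fix(self) -> tuple[float, float]:
            return (42.274, -71.808)             # placeholder lat/lon

    def collect(lidar: Lidar, encoder: WheelEncoder, gps: Gps,
                out_path: str, hz: float = 10.0) -> None:
        """Append one record per tick to a JSON-lines log (the 'database')."""
        with open(out_path, "a") as db:
            while True:
                record = {
                    "t": time.time(),
                    "odometer_m": encoder.distance_m(),
                    "latlon": gps.fix(),
                    "profile": lidar.read_profile(),
                }
                db.write(json.dumps(record) + "\n")
                time.sleep(1.0 / hz)

    # collect(Lidar(), WheelEncoder(), Gps(), "roadgnar_log.jsonl")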

Much like the roads in their Massachusetts town, the creation process of ROADGNAR was not without its challenges. The biggest problem took the form of the COVID-19 pandemic, which hit the ROADGNAR team in the middle of development. Once WPI closed to encourage its students and faculty to practice social distancing, the team was without a base of operations.

"When the coronavirus closed our school, we were lucky enough to live pretty close to each other," said Paleaz. "We took precautions, but were able to come together to test and power through to finish our project."



Integrating LiDAR into the car was also a challenge. Occasionally, the LiDAR would shut off when the car began moving. The team had to take several measures to keep the sensor online, often contacting SICK's help center for instruction.

"One of the major challenges was making sure we were getting enough data on a given road surface," said Budris. "At first we were worried that we wouldn't get enough data from the sensor to make ROADGNAR feasible, but we figured that if we drove at a slow and constant rate, we'd be able to get accurate scans."

With the challenge complete, Pelaez, Budris, and Parker are looking to turn ROADGNAR into a genuine product. They have already contacted an experienced business partner to help them determine their next steps.



They are now interviewing with representatives from various Departments of Public Works throughout Massachusetts and Connecticut. Thirteen municipalities have indicated that they would be extremely interested in utilizing ROADGNAR, as it would drastically reduce the time needed to assess all the roads in the area. The trio is excited to see how different LiDAR sensors can help refine ROADGNAR into a viable product.

"We'd like to keep the connection going," explained Pelaez. "If we can keep the door open for a potential partnership between us and SICK, that'd be great."

SICK is now accepting entries for the TiM$10K Challenge for the 2022-2023 school year!

Student teams are encouraged to use their creativity and technical knowledge to incorporate the SICK LiDAR for any industry in any application. Advisors/professors are allowed to guide the student teams as required.






Today, Clearpath Robotics is opening pre-orders for the newest, fanciest TurtleBot: the TurtleBot 4. Built on top of iRobot’s Create 3 in close partnership with Open Robotics, the TurtleBot 4 is “the next-generation of the world’s most popular open-source robotics platform for education and research, offering superior computing power, more payload capacity, improved sensors, and a world class user experience at an affordable price.” Couldn’t have said it better myself, no matter how many times I've tried.

TurtleBot 4’s big differentiator is that it’s designed to showcase ROS 2, the powerful open source Robot Operating System that is working hard to successfully transition from robotics research into an all-purpose framework that can safely and reliably power commercial robots as well. This is the first version of the TurtleBot to run ROS 2 from the ground up (including the Create 3 base), and offers an opportunity for anyone from precocious middle schooler on up to learn ROS 2 in a safe and well-supported way, on real hardware that is affordable(ish).


Clearpath Robotics

There will be two versions of the TurtleBot 4 available for pre-order from Clearpath, starting today. Both versions use the iRobot Create 3 development platform (read more about that here) as a mobility base, with the same power and charging system including a base station. Both also include a 2D RPLIDAR-A1 sensor with a 0.15m to 12m range. Compute comes in the form of a Raspberry Pi 4B running Ubuntu 20.04 with ROS 2 already installed.
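
Since the Raspberry Pi ships with ROS 2 preinstalled, getting data off the robot should be a matter of writing an ordinary ROS 2 node. As a minimal sketch (assuming the RPLIDAR driver publishes sensor_msgs/LaserScan on the conventional /scan topic), a Python node that reports the nearest obstacle might look like this:

    # Minimal ROS 2 (rclpy) node reading the TurtleBot 4's 2D laser scans,
    # assuming the RPLIDAR driver publishes sensor_msgs/LaserScan on the
    # conventional /scan topic.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import LaserScan


    class ScanWatcher(Node):
        def __init__(self) -> None:
            super().__init__("scan_watcher")
            self.create_subscription(LaserScan, "/scan", self.on_scan, 10)

        def on_scan(self, msg: LaserScan) -> None:
            # Report the nearest return inside the sensor's valid range.
            valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
            if valid:
                self.get_logger().info(f"nearest obstacle: {min(valid):.2f} m")


    def main() -> None:
        rclpy.init()
        rclpy.spin(ScanWatcher())


    if __name__ == "__main__":
        main()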

From there, the TurtleBot 4 Standard splits off from the TurtleBot 4 Lite. The Lite version misses out on some additional options for user accessible power, as well as useful interfaces including extra LEDs, some physical buttons, and a small OLED display that by default shows the robot’s IP address (or whatever else you want). This is especially neat because it makes it easy to fire the robot up and launch a demo behavior without requiring an external computer. The other big difference is in the sensor: the Lite includes an OAK-D-Lite camera and stereo depth sensor, while the TurtleBot 4 Standard comes with a more capable OAK-D-Pro.

The cost of the TurtleBot 4 Lite is USD $1,095, while the TurtleBot 4 Standard is USD $1,750. Pre-orders will be available starting today through Clearpath distributors in North America, Europe, and Asia, and shipping will begin in July. This is certainly a premium over what you'd pay for all of the parts individually, and you can certainly build yourself a TurtleBot 4 mostly from scratch if you want to. But unless you have a specific interest in that process, there's a lot of value in getting a robot that is ready to go right out of the box.

Clearpath Robotics

Using the Create 3 as a base gives the TurtleBot 4 both the ruggedness of a Roomba and a bunch of useful integrated sensors—the same ones that Roombas use to reliably navigate your house and not fall down your stairs. The Create 3’s battery gives the TurtleBot 4 an impressive minimum battery life of 2.5 hours, and all of the parts are easy to fix or replace since you’ve got access to iRobot’s supply chain. Top speed is nearly half a meter per second, or slightly slower if you don’t disable the cliff sensors.

If any of this doesn’t satisfy your needs, part of the point of the TurtleBot platform is that it’s super easy to expand, as long as you know what you’re doing (or are willing to learn). Power and communications ports are easy to access, and the TurtleBot 4 has lots of easy ways to mount up to 9 kilograms of hardware.

Clearpath Robotics

Historically, TurtleBots have been very popular in educational contexts due to their affordable versatility and (at least in part) to the community support behind them and behind ROS more broadly. They’re great platforms for getting started with ROS (now ROS 2) on your own, or with other students. No matter what problem you run into, odds are someone has already had the same one and solved it and you can find it on the ROS Answers message board. But hopefully you won’t need to do that from the start: TurtleBot 4 will ship fully assembled, with all necessary software pre-installed and configured, and you’ll have detailed user documentation plus demo code and a bunch of tutorials. There’s also an Ignition Gazebo simulation model to play with, which you can access without even buying a TurtleBot 4 at all, as it's completely free. This should be especially useful for classrooms, where multiple students could work in simulation before trying things out on the real robot.

Clearpath Robotics

To get more details on the TurtleBot 4, we talked with:

  • Bryan Webb, President of Clearpath Robotics
  • Steve Shamlian, Principal Software Engineer at iRobot
  • Katherine Scott, Developer Advocate at Open Robotics
  • Tully Foote, ROS Platform Manager at Open Robotics

IEEE Spectrum: Why is now the right time for a TurtleBot 4?

Katherine Scott, Open Robotics: I think there was always a rough idea that we wanted to get a new TurtleBot out around Foxy. Foxy is fairly well baked, and we wanted to give people a way to learn ROS 2, and especially for new people coming into the community, they’d have a way to start with ROS 2—that was a big motivator.

Tully Foote, OSRF: It was the beginning of 2021, and basically, we went to Clearpath and started talking about our vision for the TurtleBot 4, and how we wanted to bring it back more along the lines of the TurtleBot 2. We’d found that while the TurtleBot 3 has been awesome as a smaller and cheaper platform, the TurtleBot 2 had hit a sweet spot in size where it could carry things and go over things and be more of a ground robot as opposed to a desk robot. We had some knowledge of what was going on at iRobot with the Create 3, which runs ROS 2, so with that we’re building a ROS 2 robot on top of a ROS 2 base.

“Because it’s got the Raspberry Pi on it, it’s extensible. ... Certainly if you’re creative enough, I could picture taking this robot all the way through at least their master’s, and then possibly starting a Ph.D. with it.”
—Bryan Webb, Clearpath Robotics

Bryan Webb, Clearpath: We thought that the primary ingredients for TurtleBot had really progressed over the last few years, so we could offer a much better development platform than was currently available, and support the community with the latest tools. So we were chatting about it, and it just seemed like there was a lot more that could be done to offer a higher capability robot in that entry level space.

As iRobot was thinking about making a Create 3, at what point did you decide that it could or should be part of the TurtleBot 4?

Steve Shamlian, iRobot: We’re all roboticists here at iRobot. We have a lot of love for the TurtleBot. Especially after seeing how the original Create drove the adoption of ROS—when we saw that ROS 2 was in a place where it needed a TurtleBot, we were really excited to try to help. We really want to help make more makers and more hackers, that’s what this is about.

How customized is the Create 3 for the TurtleBot 4 platform?

Steve Shamlian, iRobot: The Create 3 is an iRobot product that we’re very proud of. We were going to do it whether or not it was going to be a part of the TurtleBot 4. The timing worked out, and I feel very happy that it did. And we definitely talked about things that would be important for TurtleBot, and whether there were design affordances that we could make, but honestly that didn’t change the design very much from what we were going to do versus the things that were requested for the TurtleBot. I think we know the community well enough that we had a good idea of what we thought they would like, so it felt really good to see those things match up so well with what was needed for the TurtleBot 4.

One thing that I always appreciated about the TurtleBot 2 was that it came with a netbook on it that made programming and debugging really easy. That’s something I could see myself missing on the TurtleBot 4.

Bryan Webb, Clearpath: Not bundling the TurtleBot 4 with a netbook is partially reflective of the maturity of ROS, but that may be secondary to the supply chain constraints that we’re living with these days. Netbooks are not really available in the way they once were, and even back when I was intimately involved with the TurtleBot 2, it was always a struggle to find netbooks of the quantity that we needed. That was a big challenge in maintaining the product. So, taking that into consideration, coupled with the amount that ROS has matured, we thought that a good compromise was to make it really easy to hook up to a desktop.

Katherine Scott, Open Robotics: When the TurtleBot 2 was built, most single-board computers were fairly nascent. We put a laptop on there because that’s what was powerful enough to run the robot. One big thing for me was to at least get some minimum viable interface on the TurtleBot 4—a screen and some buttons so that you can at least see the IP right there and SSH into the robot within a minute of turning it on. We’ll be focusing a lot on the user experience here, and making it easy to use.

“We’re really looking to have something that offers a lot to novice, intermediate, and expert users of ROS.”
—Bryan Webb, Clearpath Robotics

How easy will it be to get started with TurtleBot 4, especially for beginners?

Bryan Webb, Clearpath: We’re going to have at least one formal educational course based around the TurtleBot 4. At this point, there’s going to be at least one, and we have eyes towards other opportunities to extend that.

Katherine Scott, Open Robotics: We've had a lot of discussions with academics and other people along the way, trying to figure out what's going to work—you know, do we have to do courseware, or do we just provide the content, and what’s it going to look like?

With TurtleBot 4, we leaned into the simulation side of it a little bit more than we usually would. In a classroom setting, the feedback that we get a lot is that robots are really exciting, but they’re expensive. Classroom robots have always been expensive. So if we can do everything with simulation and then every classroom has two or three robots, I think it’s going to be a better way to do things going forward.

Tully Foote, OSRF: And part of this is also we're going to be working hard to put together courseware and materials to be able to teach in the classroom, for a fully integrated experience. We’re hoping to have someone from academia writing real content for this, rather than asking a Silicon Valley engineer to do it. We want to get someone who knows what they’re talking about. The scope will be an introduction to robotics, so it may be starting not far beyond turning your computer on, but the goal will be to get to a college-ish level. And once we get a body of work there, we’d love to push it down to make it more accessible to middle and high school students, and also add more advanced things for graduate level.

How far can TurtleBot 4 take you in robotics?

Bryan Webb, Clearpath: There’s a lot of potential with the TurtleBot 4. Because it’s got the Raspberry Pi on it, it’s extensible. You can put on new sensors for different kinds of research, and build on top of it both physically and through software development. Certainly if you’re creative enough, I could picture taking this robot all the way through at least their master’s, and then possibly starting a Ph.D. with it.

Tully Foote, OSRF: I’d like to think that the TurtleBot 4, as a platform, is capable enough to take you through grad school if you’re doing straight robotics. If you want to work on multi-robot coordination, it has all the basics. And you should be able to add an arm onto it, and other things like that. But it’s always going to be an entry level robot. If you want to do mobile manipulation, TurtleBot can get you started, but you’re going to want to upgrade to a bigger, stronger platform. It’s really that entry-level robot for before you specialize.

Katherine Scott, Open Robotics: It’s also a good platform for when you’re starting a company. It’s a good platform for getting halfway there, before you can get to where you’re going. As an abstract mobile base, you can build proof of concept ideas, and when you’re ready, move up. The thing I’m excited about is that, if we do things right, a year from now we’ll see people extending the TurtleBot 4 with new hardware and capabilities.



Today, Clearpath Robotics is opening pre-orders for the newest, fanciest TurtleBot: the TurtleBot 4. Built on top of iRobot’s Create 3 in close partnership with Open Robotics, the TurtleBot 4 is “the next-generation of the world’s most popular open-source robotics platform for education and research, offering superior computing power, more payload capacity, improved sensors, and a world class user experience at an affordable price.” Couldn’t have said it better myself, no matter how many times I've tried.

TurtleBot 4’s big differentiator is that it’s designed to showcase ROS 2, the powerful open source Robotic Operating System that is working hard to successfully transition from robotics research into an all-purpose framework that can safely and reliably power commercial robots as well. This is the first version of the TurtleBot to run ROS 2 from the ground up (including the Create 3 base), and offers an opportunity for anyone from precious middle schooler on up to learn ROS 2 in a safe and well supported way, on real hardware that is affordable(ish).


Clearpath Robotics

There will be two versions of the TurtleBot 4 available for pre-order from Clearpath, starting today. Both versions use the iRobot Create 3 development platform (read more about that here) as a mobility base, with the same power and charging system including a base station. Both also include a 2D RPLIDAR-A1 sensor with a 0.15m to 12m range. Compute comes in the form of a Raspberry Pi 4B running Ubuntu 20.04 with ROS 2 already installed.

From there, the TurtleBot 4 Standard splits off from the TurtleBot 4 Lite. The Lite version misses out on some additional options for user accessible power, as well as useful interfaces including extra LEDs, some physical buttons, and a small OLED display that by default shows the robot’s IP address (or whatever else you want). This is especially neat because it makes it easy to fire the robot up and launch a demo behavior without requiring an external computer. The other big difference is in the sensor: the Lite includes an OAK-D-Lite camera and stereo depth sensor, while the TurtleBot 4 Standard comes with a more capable OAK-D-Pro.

The cost of the TurtleBot 4 Lite is USD $1,095, while the TurtleBot 4 Standard is USD $1,750. Pre-orders will be available starting today through Clearpath distributors in North America, Europe, and Asia, and shipping will begin in July. This is certainly a premium over what you'd pay for all of the parts individually, and you can certainly build yourself a TurtleBot 4 mostly from scratch if you want to. But unless you have a specific interest in that process, there's a lot of value in getting a robot that is ready to go right out of the box.

Clearpath Robotics

Using the Create 3 as a base gives the TurtleBot 4 both the ruggedness of a Roomba and a bunch of useful integrated sensors—the same ones that Roombas use to reliably navigate your house and not fall down your stairs. The Create 3’s battery gives the TurtleBot 4 an impressive minimum battery life of 2.5 hours, and all of the parts are easy to fix or replace since you’ve got access to iRobot’s supply chain. Top speed is nearly half a meter per second, or slightly slower if you don’t disable the cliff sensors.

If any of this doesn’t satisfy your needs, part of the point of the TurtleBot platform is that it’s super easy to expand, as long as you know what you’re doing (or are willing to learn). Power and communications ports are easy to access, and the TurtleBot 4 has lots of easy ways to mount up to 9 kilograms of hardware.

Clearpath Robotics

Historically, TurtleBots have been very popular in educational contexts due to their affordable versatility and (at least in part) to the community support behind them and behind ROS more broadly. They’re great platforms for getting started with ROS (now ROS 2) on your own, or with other students. No matter what problem you run into, odds are someone has already had the same one and solved it and you can find it on the ROS Answers message board. But hopefully you won’t need to do that from the start: TurtleBot 4 will ship fully assembled, with all necessary software pre-installed and configured, and you’ll have detailed user documentation plus demo code and a bunch of tutorials. There’s also a Ignition Gazebo simulation model to play with, which you can access without even buying a TurtleBot 4 at all, as it's completely free. This should be especially useful for classrooms, where multiple students could work in simulation before trying things out on the real robot.

Clearpath Robotics

To get more details on the TurtleBot 4, we talked with:

  • Bryan Webb, President of Clearpath Robotics
  • Steve Shamlian, Principal Software Engineer at iRobot
  • Katherine Scott, Developer Advocate at Open Robotics
  • Tully Foote, ROS Platform Manager at Open Robotics

IEEE Spectrum: Why is now the right time for a TurtleBot 4?

Katherine Scott, Open Robotics: I think there was always a rough idea that we wanted to get a new TurtleBot out around Foxy. Foxy is fairly well baked, and we wanted to give people a way to learn ROS 2, and especially for new people coming into the community, they’d have a way to start with ROS 2—that was a big motivator.

Tully Foote, OSRF: It was the beginning of 2021, and basically, we went to Clearpath and started talking about our vision for the TurtleBot 4, and how we wanted to bring it back more along the lines of the TurtleBot 2. We’d found that while the TurtleBot 3 has been awesome as a smaller and cheaper platform, the TurtleBot 2 had hit a sweet spot in size where it could carry things and go over things and be more of a ground robot as opposed to a desk robot. We had some knowledge of what was going on at iRobot with the Create 3, which runs ROS 2, so with that we’re building a ROS 2 robot on top of a ROS 2 base.

“Because it’s got the Raspberry Pi on it, it’s extensible. ... Certainly if you’re creative enough, I could picture taking this robot all the way through at least their masters, and then possibly starting a Ph.D with it.”
—Bryan Webb, Clearpath Robotics

Bryan Webb, Clearpath: We thought that the primary ingredients for TurtleBot had really progressed over the last few years, so we could offer a much better development platform than was currently available, and support the community with the latest tools. So we were chatting about it, and it just seemed like there was a lot more that could be done to offer a higher capability robot in that entry level space.

As iRobot was thinking about making a Create 3, at what point did you decide that it could or should be part of the TurtleBot 4?

Steve Shamlian, iRobot: We’re all roboticists here at iRobot. We have a lot of love for the TurtleBot. Especially after seeing how the original Create drove the adoption of ROS—when we saw that ROS 2 was in a place where it needed a TurtleBot, we were really excited to try to help. We really want to help make more makers and more hackers, that’s what this is about.

How customized is the Create 3 for the TurtleBot 4 platform?

Steve Shamlian, iRobot: The Create 3 is an iRobot product that we’re very proud of. We were going to do it whether or not it was going to be a part of the TurtleBot 4. The timing worked out, and I feel very happy that it did. And we definitely talked about things that would be important for TurtleBot, and whether there were design affordances that we could make, but honestly that didn’t change the design very much from what we were going to do versus the things that were requested for the TurtleBot. I think we know the community well enough that we had a good idea of what we thought they would like, so it felt really good to see those things match up so well with what was needed for the TurtleBot 4.

One thing that I always appreciated about the TurtleBot 2 was that it came with a netbook on it that made programming and debugging really easy. That’s something I could see myself missing on the TurtleBot 4.

Bryan Webb, Clearpath: Not bundling the TurtleBot 4 with a netbook is partially reflective of the maturity of ROS, but that may be secondary to the supply chain constraints that we’re living with these days. Netbooks are not really available in the way they once were, and even back when I was intimately involved with the TurtleBot 2, it was always a struggle to find netbooks of the quantity that we needed. That was a big challenge in maintaining the product. So, taking that into consideration, coupled with the amount that ROS has matured, we thought that a good compromise was to make it really easy to hook up to a desktop.

Katherine Scott, Open Robotics: When the TurtleBot 2 was built, most single-board computers were fairly nascent. We put a laptop on there because that’s what was powerful enough to run the robot. One big thing for me was to at least get some minimum viable interface on the TurtleBot 4—a screen and some buttons so that you can at least see the IP right there and SSH into the robot within a minute of turning it on. We’ll be focusing a lot on the user experience here, and making it easy to use.

“We’re really looking to have something that offers a lot to novice, intermediate, and expert users of ROS.”
—Bryan Webb, Clearpath Robotics

How easy will it be to get started with TurtleBot 4, especially for beginners?

Bryan Webb, Clearpath: We’re going to have at least one formal educational course based around the TurtleBot 4, and we have eyes towards other opportunities to extend that.

Katherine Scott, Open Robotics: We've had a lot of discussions with academics and other people along the way, trying to figure out what's going to work—you know, do we have to do courseware, or do we just provide the content, and what’s it going to look like?

With TurtleBot 4, we leaned into the simulation side of it a little bit more than we usually would. In a classroom setting, the feedback that we get a lot is that robots are really exciting, but they’re expensive. Classroom robots have always been expensive. So if we can do everything with simulation and then every classroom has two or three robots, I think it’s going to be a better way to do things going forward.

Tully Foote, OSRF: Part of this is also that we’re going to be working hard to put together courseware and materials to be able to teach in the classroom, for a fully integrated experience. We’re hoping to have someone from academia writing real content for this, rather than asking a Silicon Valley engineer to do it. We want to get someone who knows what they’re talking about. The scope will be an introduction to robotics, so it may start not far beyond turning your computer on, but the goal will be to get to a college-ish level. And once we get a body of work there, we’d love to push it down to make it more accessible to middle and high school students, and also add more advanced material at the graduate level.

How far can TurtleBot 4 take you in robotics?

Bryan Webb, Clearpath: There’s a lot of potential with the TurtleBot 4. Because it’s got the Raspberry Pi on it, it’s extensible. You can put on new sensors for different kinds of research, and build on top of it both physically and through software development. Certainly if you’re creative enough, I could picture a student taking this robot all the way through at least their master’s, and then possibly starting a Ph.D. with it.

Tully Foote, OSRF: I’d like to think that the TurtleBot 4, as a platform, is capable enough to take you through grad school if you’re doing straight robotics. If you want to work on multi-robot coordination, it has all the basics. And you should be able to add an arm onto it, and other things like that. But it’s always going to be an entry level robot. If you want to do mobile manipulation, TurtleBot can get you started, but you’re going to want to upgrade to a bigger, stronger platform. It’s really that entry-level robot for before you specialize.

Katherine Scott, Open Robotics: It’s also a good platform for when you’re starting a company: it can get you halfway there, before you can get to where you’re going. As an abstract mobile base, you can build proof-of-concept ideas, and when you’re ready, move up. The thing I’m excited about is that, if we do things right, a year from now we’ll see people extending the TurtleBot 4 with new hardware and capabilities.

Chefs frequently rely on taste to assess the content and flavor of dishes during cooking. While tasting the food, the mastication process also provides continuous feedback by exposing the taste receptors to food at various stages of chewing. Since different ingredients of a dish undergo specific changes during chewing, mastication helps in understanding the food content. Current methods of electronic tasting, by contrast, always use a single taste snapshot of a homogenized sample. We propose a robotic setup that uses mixing to imitate mastication and tastes the dish at two different mastication phases. Each tasting is done using a conductance probe measuring conductance at multiple, spatially distributed points. This data is used to classify 9 varieties of scrambled eggs with tomatoes. We test four different tasting methods and analyze the resulting classification performance, showing a significant improvement over tasting homogenized samples. The experimental results show that tasting at two states of mechanical processing of the food increased the classification F1 score to 0.93, compared with an F1 score of 0.55 for the traditional tasting of a homogenized sample. We attribute this performance increase to the fact that different dishes are affected differently by the mixing process and have different spatial distributions of salinity, which helps the robot distinguish between dishes of the same average salinity but different ingredient content. This work demonstrates that mastication plays an important role in robotic tasting, and implementing it can improve the tasting ability of robotic chefs.
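
To make the two-phase idea concrete, here is a minimal sketch of how such a classifier might be wired up, assuming each tasting yields a fixed-length vector of conductance readings from the spatially distributed probe points. The data shapes, the simulated readings, and the scikit-learn random forest are all illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: classifying dishes from conductance readings taken at
# two mastication (mixing) phases. Data shapes, the simulated readings, and
# the random-forest classifier are illustrative assumptions, not the paper's
# actual pipeline.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_probe_points, n_classes = 180, 8, 9  # assumed sizes

# Stand-in data: one conductance vector per mastication phase, per sample.
phase1 = rng.normal(size=(n_samples, n_probe_points))
phase2 = rng.normal(size=(n_samples, n_probe_points))
labels = rng.integers(0, n_classes, size=n_samples)

# The key idea from the abstract: concatenate readings from both phases so
# the classifier can exploit how mixing redistributes salinity differently
# for different dishes of the same average salinity.
X = np.hstack([phase1, phase2])

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("macro F1:", f1_score(y_test, clf.predict(X_test), average="macro"))
```

The only load-bearing idea here is the feature concatenation: readings from both mastication phases go into one vector, so the classifier can see how mechanical processing changes each dish.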

The locomotion of soft snake robots depends on frictional interactions with the environment. Frictional anisotropy is a morphological characteristic of snakeskin that allows snakes to engage selectively with surfaces and generate propulsive forces. The prototypical slithering gait of most snakes is lateral undulation, which requires a significant lateral resistance that is lacking in the artificial skins of existing soft snake robots. We designed a set of kirigami lattices with curvilinearly arranged cuts to take advantage of in-plane rotations of the 3D structures when wrapped around a soft bending actuator. By changing the initial orientation of the scales, the kirigami skin produces high lateral friction upon engagement with surface asperities, with lateral-to-cranial anisotropic friction ratios above 4. The proposed design increased the overall velocity of the soft snake robot more than fivefold compared with robots without the skin.
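
For a rough intuition of why a lateral-to-cranial friction ratio above 4 matters, here is a back-of-the-envelope resistive-force sketch, not the paper's analysis: it integrates anisotropic Coulomb friction along a sinusoidally undulating body, with all parameters (wave shape, drift speed, friction coefficients) assumed for illustration.

```python
# Back-of-the-envelope resistive-force sketch (not the paper's analysis):
# anisotropic Coulomb friction integrated along a sinusoidally undulating
# body over one undulation cycle. All parameters are assumed.
import numpy as np

def mean_forward_force(mu_lat, mu_cran, n_seg=100, n_t=200):
    s = np.linspace(0, 2 * np.pi, n_seg)          # stations along the body
    total_fx = 0.0
    for tk in np.linspace(0, 2 * np.pi, n_t, endpoint=False):
        wave = np.cos(s + tk)   # for y = sin(s + t): dy/dt = dy/ds = cos(s + t)
        dy_dt, dy_ds = wave, wave
        norm = np.sqrt(1.0 + dy_ds**2)
        tx, ty = 1.0 / norm, dy_ds / norm         # unit tangent (cranial axis)
        nx, ny = -ty, tx                          # unit normal (lateral axis)
        vx, vy = 0.05, dy_dt                      # small forward drift + wave motion
        v = np.hypot(vx, vy) + 1e-9
        ux, uy = vx / v, vy / v                   # unit velocity per segment
        f_t = -mu_cran * (ux * tx + uy * ty)      # friction along the body axis
        f_n = -mu_lat * (ux * nx + uy * ny)       # friction across the body axis
        total_fx += np.mean(f_t * tx + f_n * nx)  # forward (x) component
    return total_fx / n_t

for ratio in (1, 2, 4):
    f = mean_forward_force(mu_lat=0.2 * ratio, mu_cran=0.2)
    print(f"mu_lat/mu_cran = {ratio}: cycle-averaged forward force = {f:+.4f}")
```

With isotropic friction, the cycle-averaged force is slightly negative (pure drag); as the lateral coefficient grows, the lateral resistance on slanted body segments acquires a net forward component, which is the propulsion mechanism the kirigami skin engages.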

For robots that can provide physical assistance, maintaining synchronicity of robot and human movement is a precursor for interaction safety. Existing research on collaborative HRI does not consider how synchronicity can be affected if humans are subjected to cognitive overloading and distractions during close physical interaction. Cognitive neuroscience has shown that unexpected events during interactions affect not only action cognition but also human motor control (Gentsch et al., Cognition, 2016, 146, 81–89). If the robot is to safely adapt its trajectory to distracted human motion, quantitative changes in the human movement should be evaluated. The main contribution of this study is the analysis and quantification of disrupted human movement during a physical collaborative task that involves robot-assisted dressing. Quantifying disrupted movement is the first step in maintaining the synchronicity of the human-robot interaction. The human movement data, collected from a series of experiments in which participants were subjected to cognitive loading and distractions during the human-robot interaction, are projected into a 2-D latent space that efficiently represents the high dimensionality and non-linearity of the data. The quantitative data analysis is supported by a qualitative study of user experience, using the NASA Task Load Index to measure perceived workload and the PeRDITA questionnaire to represent the human psychological state during these interactions. In addition, we present an experimental methodology for collecting interaction data in this type of human-robot collaboration that provides realism, experimental rigour, and high fidelity of the human-robot interaction in the scenarios.
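
The abstract doesn't specify the embedding method, so the sketch below uses PCA purely as a stand-in to show the shape of the analysis: flatten each movement trial into a vector, project into a 2-D latent space, and compare where distracted and undistracted trials land. The data, dimensions, and labels are all simulated assumptions.

```python
# Illustrative sketch: projecting high-dimensional movement trials into a
# 2-D latent space. PCA is a stand-in (the study's actual non-linear
# embedding isn't specified in the abstract), and all data is simulated.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Assume each trial is a flattened joint trajectory:
# n_joints x 3 coordinates x n_timesteps values per trial.
n_trials, n_joints, n_timesteps = 40, 10, 50
trials = rng.normal(size=(n_trials, n_joints * 3 * n_timesteps))
distracted = rng.integers(0, 2, size=n_trials).astype(bool)  # condition labels

latent = PCA(n_components=2).fit_transform(trials)

# A shift between the two clusters in latent space is one simple way to
# quantify how much the distraction disrupted the movement.
for name, mask in [("undistracted", ~distracted), ("distracted", distracted)]:
    print(f"{name} centroid in latent space: {latent[mask].mean(axis=0)}")
```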

Lower-limb exoskeletons are widely used for rehabilitation training of patients suffering from neurological disorders. To improve human–robot interaction performance, series elastic actuators (SEAs) with low output impedance have been developed. However, adaptability and control performance are limited by the constant spring stiffness used in current SEAs. In this study, a novel load-adaptive variable stiffness actuator (LaVSA) is used to design an ankle exoskeleton. To overcome the LaVSA's larger mechanical gap and more complex dynamic model, a sliding-mode controller based on a disturbance observer is proposed. During interaction, because of the passive joints on the load side of the ankle exoskeleton, the dynamic parameters on the load side change continuously. To handle this, the designed controller treats these changes, together with the model error, as a disturbance and observes it in real time with a disturbance observer (DOB). The first-order derivative of the disturbance is treated as bounded. A parameter adaptive law is then used to find the upper bound of the observation error and apply the corresponding compensation in the control law. On this basis, a sliding-mode controller based on a disturbance observer is designed, and a Lyapunov stability analysis is given. Finally, simulation and experimental verification are performed. The wearing experiment shows that the resistance torque experienced by the human during human–robot interaction stays below 120 N·mm, confirming that the controller can realize zero-impedance control of the designed ankle exoskeleton.
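
As a sketch of the control idea only (assumed plant, gains, and disturbance; not the paper's LaVSA model), the snippet below runs a sliding-mode controller on a one-degree-of-freedom joint I*qdd = u + d, with a first-order disturbance observer feeding its estimate d_hat forward so the discontinuous switching gain can stay small.

```python
# Minimal sketch of the control idea (assumed plant, gains, and disturbance;
# not the paper's LaVSA controller): sliding-mode control of a 1-DOF joint
# I*qdd = u + d, with a first-order disturbance observer (DOB) feeding its
# estimate d_hat forward so the switching gain K can stay small.
import numpy as np

I, dt, T = 0.05, 1e-3, 3.0              # inertia [kg*m^2], time step, duration
lam, K, phi, L = 20.0, 0.3, 0.02, 50.0  # surface slope, switching gain,
                                        # boundary layer, DOB convergence rate

q, qd, z = 0.0, 0.0, 0.0                # joint angle, velocity, DOB state
for k in range(int(T / dt)):
    t = k * dt
    q_ref, qd_ref, qdd_ref = np.sin(t), np.cos(t), -np.sin(t)  # reference
    d = 0.2 * np.sin(5 * t) + 0.1       # unknown disturbance (model error, load side)

    d_hat = z + L * I * qd              # DOB estimate: d_hat -> d at rate L
    e, ed = q - q_ref, qd - qd_ref
    s = ed + lam * e                    # sliding surface
    # Equivalent control + DOB feedforward + smoothed switching term;
    # K only needs to dominate the residual estimation error d - d_hat.
    u = I * (qdd_ref - lam * ed) - d_hat - K * np.tanh(s / phi)

    qdd = (u + d) / I                   # plant dynamics
    z += dt * (-L * (u + z + L * I * qd))  # DOB internal-state update
    qd += dt * qdd
    q += dt * qd

print(f"tracking error after {T:.0f} s: {q - np.sin(T):+.2e} rad")
```

The observer here is the standard trick of estimating the disturbance without measuring acceleration: the internal state z is propagated so that d_hat = z + L*I*qd converges to d at rate L, which is why the switching gain only needs to exceed the residual estimation error rather than the full disturbance.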



Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.

ICRA 2022: 23–27 May 2022, PHILADELPHIA
IEEE ARSO 2022: 28–30 May 2022, LONG BEACH, CALIF.
RSS 2022: 21 June–1 July 2022, NEW YORK CITY
ERF 2022: 28–30 June 2022, ROTTERDAM, NETHERLANDS
RoboCup 2022: 11–17 July 2022, BANGKOK
IEEE CASE 2022: 20–24 August 2022, MEXICO CITY
CLAWAR 2022: 12–14 September 2022, AZORES, PORTUGAL

Enjoy today's videos!

I'm not sure it's geographically appropriate for a Husky robot to be this close to penguins in Antarctica, but on the other hand, who cares, because I am all in for robots and penguins.

The project consists of a hybrid (autonomous and remote-controlled) Husky UGV-based robot named ECHO that carries a variety of sensors including a camera and an RFID antenna to read RFID tags from chipped penguins (the kind of chips that are also used to chip dogs and cats). With the RFID scanner, ECHO will scan penguins to assess their breeding status and survival success. Overall, the robot will be able to track individual penguins throughout their lifetimes, allowing researchers to gather data for behavioral and population dynamics research.

[ Clearpath ]

Snap has launched a little camera drone called Pixy. It's nothing special, but that's fine: it looks to be small, safe, and quite easy to use. And I really appreciate that this video seems to show actual footage from the drone, which is not fantastic, but totally workable.

$250 seems a bit steep, but perhaps the safe form factor and ease of use could make it worthwhile.

[ Pixy ]

This is pretty awesome: it's a RoboCup Standard Platform League event where the robots are operating fully autonomously. Watch right after kick-off as the robot in the black jersey (closest to the ball) books it off screen to the left. As it turns out, she (her name is Sarah) went deep into the opponent's half, where she camped out by the goal, in a perfect position to receive a brilliant pass.

[ B-Human ]

GITAI has already demonstrated its robotic arm inside the International Space Station, and now it looks like they're getting ready to work outside of the station as well.

[ GITAI ]

Things that I want robots to do so that I don't have to: Waste sorting.

Weird to have them call the robot both "she" and "unmanned" in practically the same sentence.

[ ZenRobotics ]

At Agility, we make robots that are made for work. Our expertise is marrying design, software, and hardware to build robots that are capable of doing limitless tasks as part of a blended human-robot workforce.

OK, I really want to know if Digit can use that step stool at the back of the trailer.

[ Agility ]

Zimbabwe Flying Labs' Tawanda Chihambakwe shares how Zimbabwe Flying Labs started using drones for STEM programs and how drones impact conservation and agriculture.

[ ZFL ]

Robotics has the potential to revolutionize our daily lives, enabling humans to do things never thought possible. SRI is at the forefront of developments that have and will continue to redefine manufacturing, medicine, safety, and so much more.

[ SRI ]

A drone show from CollMot, which seems to use much larger drones than anyone else.

[ CollMot ]



This sponsored article is brought to you by Robotics Summit & Expo.

The Robotics Summit & Expo is returning to Boston on May 10-11 at the Boston Convention and Exhibition Center!


This international event will bring attendees content that will help them design, develop, manufacture, and deliver commercial-class robots.


Register now and save 25% on your full conference pass by using code RSE25 at checkout.


This year's program has an exceptional lineup of speakers covering trending topics in the industry such as interoperability, cloud technology, autonomous mobile robots, human-scale automation, collaborative robots, motion control, and much more across the program's five dedicated tracks.

Attendees will hear keynote presentations from industry thought leaders including:

  • Brian Gerkey, Co-founder/CEO, Open Robotics: "Robotics Needs a Babelfish: The Skinny on Robot Interoperability"
  • Rick Faulk, CEO, Locus Robotics: "Automation in the Warehouse: Optimizing Productivity with Business Intelligence"
  • Jon Hirschtick, General Manager of Onshape and Atlas, PTC: "The Future of Product Design in a Connected World"
  • Melonee Wise, VP of Robotics Automation, Zebra Technologies: "Why the Cloud is a Force Multiplier for Robotics"
  • Greg Smith, President, Industrial Automation Group, Teradyne: "Collaborative Robotics: Resolving the Manufacturing Labor Crisis, Creating New Opportunities"
  • Kevin Blankespoor, VP & General Manager of Warehouse Robotics, Boston Dynamics: "The Next Generation of Mobile Robot Applications"

Not only does our event provide attendees with educational sessions and access to some of the leading robotics companies around the nation, but it also offers complimentary events and unlimited networking opportunities, including a reception on the expo floor, a career fair after the event, and a chance to walk Boston Dynamics' Spot quadruped.

Attendees will have access to two additional co-located events: The Healthcare Robotics Engineering Forum and DeviceTalks Boston.

And remember: you can save 25% on your full conference pass right now by using code RSE25 at checkout!



