Today’s announcement of the acquisition of the Open Source Robotics Corporation by Intrinsic has generated a lot of questions, and no small amount of uncertainty about how this will affect the future of ROS. We have a bunch more information in this article, and there are also three blog posts you can take a look at: one from Open Robotics, one from Intrinsic, and one on ROS Discourse.
Earlier this week, we were able to speak with Brian Gerkey, co-founder and CEO of Open Robotics, alongside Wendy Tan White, CEO of Intrinsic, to ask them about the importance of this partnership and how it’s going to change things for the ROS community.
IEEE Spectrum: Why is Intrinsic acquiring OSRC?
Brian Gerkey: Things are really different from how they were 10 years ago, when we started OSRF. At that time, we were just starting to see companies coming onto the scene and rolling out robots in a serious way, but now, the robotics industry has really taken off. Which is a great thing—but that’s also meant that we here at Open Robotics, with our small independent team, have been feeling the pressure. We’re trying to support this really broad community, and that has become increasingly difficult for us to do as a small company with limited resources. And so, we were really excited by the opportunity to team up with Intrinsic as a partner who is philosophically aligned with us, and who is going to support us as we continue doing this open source work while also building some industry-hardened ROS solutions on top.
Wendy Tan White: When Brian talks about common alignment, our whole mission at Intrinsic is about democratizing robotics. The demand is there now, but access is still limited. It’s still very much either the preserve of researchers, or of heavy industry that’s been using robots in the same way for the last 30 years. As an example, when you wanted to build a website in the old days, you’d have to build your own servers, your own middleware, and your own front end. But now, you can knock up a website tomorrow and add commerce and actually be running a business. That kind of access isn’t there yet for robotics, and I feel like the world needs that. To me, what Brian has done with the ROS community is to try to lift that, and I think that if we join forces, we can lift everyone together.
Open Robotics’ model has been to remain independent while helping other companies do what they want to do with ROS. Why acquire Open Robotics as opposed to continuing that relationship?
White: If you think about a model like Linux and Red Hat, Red Hat became almost like a commercial or industrialized arm of Linux. That commercialization and hardening around Red Hat meant that industry was willing to commit to it more broadly, and what Brian was finding at Open Robotics is that he was starting to get that pull to build to that level, and then ended up building loads of customized things on top of ROS when really what was needed was a more industrialized platform. But he didn’t want to force the ROS community into that alone. So, the way that we see this is that we’d be more like the Red Hat to the Linux of OSRF. And we’re actually going to carry on investing in OSRF, to give it more stability. I don’t think Brian would have wanted to do this unless he felt like OSRF was going to stay independent.
Gerkey: Yes, absolutely. And I think one thing to keep in mind is that in terms of how open source communities are typically structured, the way that Open Robotics has done it for the last 10 years has been the exception. The situation where the foundation has not just a responsibility for governance and organization, but also happens to employ (directly or indirectly) the core developer team, is a very uncommon situation. Much more common is what we’re going to end up with on the other side of this week, which is that the foundation is a relatively small focused entity that does all of the non-code development activities, while the people who are developing the code are at companies like Intrinsic (but not only Intrinsic) where they’re building products and services on top.
And in terms of my own motivation here, I’ve been doing open source robot software since the late 1990s, even before ROS. I’ve built a career out of this. I wouldn’t go into this transition if I didn’t completely believe it was going to be good for that mission that I’ve committed myself to, personally and professionally. I am highly confident that this is going to be a very good move for the community, for the platform, for our team, and I think we’re going to build great things with Intrinsic.
“If you think about a model like Linux and Red Hat, Red Hat became almost like a commercial or industrialized arm of Linux … So, the way that we see this is that we’d be more like the Red Hat to the Linux of OSRF.” —Wendy Tan White
There are many other companies who contribute substantially to ROS, and who understand the value of the ROS ecosystem. Why is Intrinsic the right choice for Open Robotics?
Gerkey: We thought hard about this. When our leadership team recognized the situation we talked about earlier—where the demands on us from the robotics community were getting to the point where we were not going to be able to do justice to this whole community that we were responsible for supporting on our own—we decided that the best way for us to be able to do that would be to join with a larger partner. We went through a lengthy strategic process and considered lots and lots of partners. I approached Stefan Schaal [Chief Science Officer at Intrinsic], who was actually on my thesis proposal committee at USC 20 years ago, thinking that somewhere within the Alphabet universe there might be a good home for us. And I was honestly surprised as Stefan told me more about what Intrinsic was doing, and about their vision to (as Wendy puts it) democratize access to robotics by building a software platform that makes robotic applications easier to develop, deploy, and maintain. That sounded a whole lot like what we’re trying to do, and it was clear pretty quickly that Intrinsic was the right match.
What is Intrinsic actually acquiring?
Gerkey: The OSRC and OSRC Singapore teams will be joining Intrinsic as part of the transaction. But what’s not going is important: ROS, Gazebo, ROSCon, TurtleBot—the ownership of the trademarks, the domain names, the management of the websites, the operations of all those things, all of that remains with OSRF. And that gets back to what I was talking about earlier: that’s the traditional role for a non-profit foundation, as a steward of an open source community.
Can you tell us how much the acquisition is for?
White: We can’t talk about the amount. But I really felt that it was fair value, and it’s in our best interest to make sure that the team is happy and that OSRF is happy through this process.
How many people will be working at OSRF after the acquisition?
Gerkey: Vanessa Yamzon Orsi is going to step up and take over as CEO. Geoffrey Biggs will step up as CTO. We’re going to have some additional outside engineers to help with running some of the infrastructure. And then from there, we’ll assess and see how big OSRF needs to be. So it’s going to be a much smaller organization, but that’s on purpose, and it’s doable because it’s no longer operating the consulting business that Open Robotics has historically been known for.
How much of the core ROS code is currently being generated and maintained by the community, and how much is being generated and maintained by the team that has been acquired by Intrinsic? How has that been changing over time?
Gerkey: Our team certainly spends more of our time on the core of ROS than other organizations tend to, and that’s in part because of that legacy of expertise that started at Willow Garage. It’s also because historically, that’s been the hardest part of the ecosystem to get external contributions to. When people start using ROS and want to contribute something, they’re much more likely to want to contribute a new thing, like a driver for a new sensor, or a new algorithm, and it’s harder to get volunteers to contribute to the core. Having said that, one of the developments over the last couple of years is that we introduced the ROS 2 technical steering committee, and that has brought in core contributions from folks like Bosch and iRobot.
“OSRC teams will be joining Intrinsic as part of the transaction. But what’s not going is important: ROS, Gazebo, ROSCon, TurtleBot—the ownership of the trademarks, the domain names, the management of the websites, the operations of all those things, all of that remains with OSRF.” —Brian Gerkey
But should we be concerned that many of the folks who have been making core ROS contributions at Open Robotics since Willow will now be moving to Intrinsic, where their focus may be different?
Gerkey: That’s a totally fair question. These are people who are well known within the community for the work that they’ve done. But I think that should be one of the most encouraging things for folks who are hearing this news: the reason these people are known is that they’ve established a track record of good action within the community. They’ve spent years and years making open source contributions. And what I would ask of the community is to give them the benefit of the doubt, that they’re going to continue doing that. That’s what they’ve always done, and that’s what we intend to keep doing.
White: I think that’s the reason Brian chose us rather than the other partners he could have been with—because Intrinsic will provide that space and latitude. Why? Because it’s actually symbiotic. I’ve never seen a step change in any industry with open source unless those relationships are symbiotic. And Alphabet has a good track record of honoring that too, and striking that balance of understanding.
I’m a believer that actions and evidence speak for themselves better than me giving you some bullshit story about how it’s going to be. I hope you will hold us to it.
Broadly speaking, how close do you think the alignment is between Intrinsic’s goals as an independent company, and the goal of supporting core ROS functionality and contributing to the ROS community?
White: I think it’s very close to where Brian was trying to take the whole of Open Robotics anyway. If you grow a set of libraries and tooling organically through a community, the problem you’ve got is that for it to reach the industrial quality that businesses want, it really will take something like Intrinsic and Alphabet to make that happen. The incumbent industry suppliers have no interest in shifting to that model. The startups do, but they’re finding it really hard to break into old industry. We’re able to bridge the two, and I think that’s the difference.
Brian, you say in one of the blog posts that being part of Intrinsic is a “big opportunity” that will have “long-term benefits” for the ROS community. Can you elaborate on that?
Gerkey: At a high level, the real advantage is going to be that there will be more sustained investment in the core platform that people have always wanted us to improve. Given the way Open Robotics operated historically, that was always a thing that we tended to do in the margins. Rarely did a customer come to us and say, “we’d like this item from your technical roadmap implemented for the next version.” It was much more like, “here’s what we need to make our application work tomorrow.” And so we’ve always had a limited ability to make the longer range investments in the platform, and so we’re going to be in a much better position to do that with Intrinsic.
To be more specific, if you look at Intrinsic’s near-term focus in industrial manufacturing, I think we can expect to see some really great synergies with the ROS Industrial community. Intrinsic has internally developed some tools and algorithms that I think would be interesting and there are discussions about how to make those contributions. So, somewhere between better and more consistent involvement in the core platform and specific improvements in industrial use cases are probably what people should look for in the near term.
“How is the community going to react? … There are certainly going to be people in the community who are not convinced; there are going to be folks out there who react negatively to this. And it’s going to be on us to bring them around over time.” —Brian Gerkey
What was your biggest concern about this partnership, and how did you resolve that concern?
Gerkey: It’s a coin toss for me between “what is my team going to think about this,” and “what is the community going to think about this.” Those were my two biggest concerns. And not because this is a bad or borderline thing and I’m going to have to convince people about some shady deal, but just more like, this is a big surprise. This has been consistently the theme as we’ve disclosed things to members of our team: the first reaction is, “what?!” Before they can even get to deciding if it’s good or bad, it’s just really different from what they expected. But then we tell them about it, and they can see it as a great opportunity, which has helped me feel better about it.
And then how is the community going to react? I mean, we’re going to find out this week. I’d say that we’ve done everything we can in good faith to structure the deal and make plans so that we are acting in the best interests of the community. We have the backing of the current OSRF to do this, and that’s a big endorsement. There are certainly going to be people in the community who are not convinced; there are going to be folks out there who react negatively to this. And it’s going to be on us to bring them around over time. We can only do that through action, we can’t do that through promises.
White: My greatest concern has been about the community. As Brian said, we sort of tested it out with a couple of folks, and even though there’s surprise, there’s normally also genuine excitement and curiosity. But there’s also some skepticism. And my own experience with dev communities around that, is that the only way to prove ourselves is to do it together.
Today, Open Robotics, which is the organization that includes the nonprofit Open Source Robotics Foundation (OSRF) as well as the for-profit Open Source Robotics Corporation (OSRC), is announcing that OSRC is being acquired by Intrinsic, a standalone company within Alphabet that’s developing software to make industrial robots intuitive and accessible.
Open Robotics is of course the organization that spun off from Willow Garage in 2012 to provide some independent structure and guidance for ROS, the Robot Operating System. Over the past dozen-ish years, ROS has expanded from specialized software for robotics nerds into a powerful platform for research and industry, supported by an enthusiastic and highly engaged open source community. Open Robotics, meanwhile, branched out in 2016 from a strict non-profit to also take on some high-profile projects for the likes of the Toyota Research Institute and NVIDIA. It has supported itself commercially by leveraging its experience and expertise in ROS development. Open Robotics currently employs more than three dozen engineers, most of whom are part of the for-profit corporation.
Intrinsic is a recent graduate from X, Alphabet’s moonshot factory; the offshoot’s mission is to “democratize access to robotics” through software tools that give traditional industrial robots “the ability to sense, learn, and automatically make adjustments as they’re completing tasks.” This, the thinking goes, will improve versatility while lowering costs. Intrinsic is certainly not unique in harboring this vision, which can be traced back to Rethink Robotics (if not beyond). But Intrinsic is focused on the software side, relying on learning techniques and simulation to help industrial robots adapt and scale in a way that won’t place an undue burden on industries that may not be used to flexible automation. Earlier this year, Intrinsic acquired intelligent automation startup Vicarious, which had been working on AI-based approaches to making robots “as commonplace and easy to use as mobile phones.”
Intrinsic’s acquisition of Open Robotics is certainly unexpected, and the question now is what it means for the ROS community and the future of ROS itself. We’ll take a look at the information that’s available today, and then speak with Open Robotics CEO Brian Gerkey as well as Intrinsic CEO Wendy Tan White to get a better understanding of exactly what’s happening.
Before we get into the details, it’s important to understand the structure of Open Robotics, which has been kind of confusing for a long time—and probably never really mattered all that much to most people until this very moment. Open Robotics is an “umbrella brand” that includes OSRF (the Open Source Robotics Foundation), OSRC (the Open Source Robotics Corporation), and OSRC-SG, OSRC’s Singapore office. OSRF is the original non-profit Willow Garage spinout, the primary mission of which was “to support the development, distribution, and adoption of open source software for use in robotics research, education, and product development.” Which is exactly what OSRF has done. But OSRF’s status as a non-profit placed some restrictions on the ways in which it was allowed to support itself. So, in 2016, OSRF created the Open Source Robotics Corporation as a for-profit subsidiary to take on contract work doing ROS development for corporate and government clients. An OSRC office in Singapore opened in 2019. If you combine these three entities, you get Open Robotics.
The reason why these distinctions are super important today is because Intrinsic is acquiring OSRC and OSRC-SG, but not OSRF. Or, as Open Robotics CEO Brian Gerkey puts it in a blog post this morning:
Intrinsic is acquiring assets only from these for-profit subsidiaries, OSRC and OSRC-SG. OSRF continues as the independent nonprofit it’s always been, with the same mission, now with some new faces and a clearer focus on governance, community engagement, and other stewardship activities. That means there is no disruption in the day-to-day activities with respect to our core commitment to ROS, Gazebo, Open-RMF, and the entire community.

To be clear: Intrinsic is not acquiring ROS. Intrinsic is not acquiring Gazebo. Intrinsic is not taking over technical roadmaps, the build infrastructure, TurtleBot, or ROSCon. As Open Robotics’ Community Director Tully Foote says in this ROS Discourse discussion forum post: “Basically, if it is an open-source tool or project it will stay with the Foundation.” What Intrinsic is acquiring is almost all of the Open Robotics team, which includes many of the folks who were fundamental architects of ROS at Willow Garage and founding members of OSRF, but who have been focused primarily on the corporation side (OSRC) rather than the foundation side (OSRF) for the past five years.
Still, while ROS itself is not part of the transaction, it’s not like OSRC hasn’t been a huge driving force behind ROS development and maintenance—in large part because of the folks who work there. Now, the vast majority of those folks will be working for a different company with its own priorities and agenda that (I would argue) simply cannot be as closely aligned with the goals of the broader ROS community as was possible when OSRC was independent. And this whole thing reminds me a little bit of when Google/Alphabet swallowed a bunch of roboticists back in 2013; while those roboticists weren’t exactly never heard from again, there was certainly a real sense of disappointment and community loss.
Hopefully, this will not be the case with Intrinsic. Gerkey’s blog post delivers a note of optimism:
With Intrinsic’s investment in ROS, we anticipate long-term benefits for the entire community through increased development on the core open source platforms. The team at Intrinsic includes many long-time ROS and Gazebo users and we have come to realize how much they value the ROS community and want to maintain and contribute.

For its part, Intrinsic’s blog post from CEO Wendy Tan White focuses more on how awesome the Open Robotics team is:
For years, we’ve admired Brian and his team’s relentless passion, skill, and dedication making the Robot Operating System (ROS) an essential platform for robotics developers worldwide (including us here at Intrinsic). We’re looking forward to supporting Brian and members of the OSRC team as they continue to push the boundaries of open-source development and what’s possible with ROS.

There’s still a lot about this acquisition that we don’t know. We don’t know the exact circumstances surrounding it, or why it’s happening now. But it sounds like the business model of OSRC may not have been sustainable, or not compatible with Open Robotics’ broader vision, or both. We also don’t know the acquisition price, which might provide some additional context. The scariest part, however, is that we just don’t know what’s going to happen next. Both Brian Gerkey and Wendy Tan White seem to be doing their best to make the community feel comfortable with (or at least somewhat accepting of) this transition for OSRC. And I have no reason to think that they’re not being honest about what they want to happen. It’s just important to remember that Intrinsic is buying OSRC primarily because buying OSRC is good for Intrinsic.
If, as Gerkey says, this partnership turns out to be a long-term benefit for the entire ROS community, then that’s wonderful, and I’m sure that’s what we’re all hoping for. In the post from Foote on the ROS Discourse discussion forum, Intrinsic CTO Torsten Kroeger says very explicitly that “the top priority for the OSRC team is to nurture and grow the ROS community.” And according to Foote, the team will have “dedicated bandwidth to work on core ROS packages, Gazebo, and Open-RMF.” But of course, priorities can change, and however things end up, OSRC will still be owned by Intrinsic. Fundamentally, all we can do is trust that the people involved (many of whom the community knows quite well) will be doing their best to ensure that this is the best path forward for everyone.
The other thing to remember here is that, as important as the broader ROS community is, everyone at Open Robotics is also a part of the ROS community, and we should (and do) want what’s best for them. These are people who have committed a huge chunk of their lives to ROS; expecting that they’ll all keep doing so indefinitely out of inertia or obligation or whatever is just not realistic or kind. If the OSRC team is excited about Intrinsic and wants to try something new, that’s fantastic, more power to them, and I hope they all get massive raises. They deserve it.
And much of what happens going forward is up to the ROS community itself, as it has always been. Are you worried about updates or packages getting maintained? Contribute some code. Worried about support? Participate in ROS Answers or add some documentation to the wiki. Worried about long-term vision or governance? There are plenty of ways to volunteer your time and expertise and enthusiasm to help keep the ROS community robust and healthy. And from the sound of things, this is exactly what the OSRC team hopes to be doing, just from inside Intrinsic instead of inside Open Robotics.
Our interview with Open Robotics CEO Brian Gerkey and Intrinsic CEO Wendy Tan White is here. And if you have specific questions, there’s a ROS Discourse thread for them here, where the Intrinsic and Open Robotics teams will be doing their best to provide answers.
Socio-conversational systems are dialogue systems, including what are sometimes referred to as chatbots, vocal assistants, social robots, and embodied conversational agents, that are capable of interacting with humans in a way that treats both the specifically social nature of the interaction and the content of a task. The aim of this paper is twofold: 1) to uncover some places where the compartmentalized nature of research conducted around socio-conversational systems creates problems for the field as a whole, and 2) to propose a way to overcome this compartmentalization and thus strengthen the capabilities of socio-conversational systems by defining common challenges. Specifically, we examine research carried out by the signal processing, natural language processing and dialogue, machine/deep learning, social/affective computing and social sciences communities. We focus on three major challenges for the development of effective socio-conversational systems, and describe ways to tackle them.
Sometime next year, an autonomous robot might deliver food from an airport restaurant to your gate.
The idea for Ottobot, a delivery robot, came out of a desire to help restaurants meet the increased demand for takeout orders during the COVID-19 pandemic. Ottobot can find its way around indoor spaces where GPS can’t penetrate.
Founded 2020
Headquarters Santa Monica, Calif.
Founders Ritukar Vijay, Pradyot Korupolu, Ashish Gupta, and Hardik Sharma

Ottobot is the brainchild of Ritukar Vijay, Ashish Gupta, Hardik Sharma, and Pradyot Korupolu. The four founded Ottonomy in 2020 in Santa Monica, Calif. The startup now has 40 employees in the United States and India.
Ottonomy, which has raised more than US $4.5 million in funding, received a Sustainability Product of the Year Award last year from the Business Intelligence Group.
Today Ottobot is being piloted not only by restaurants but also grocery stores, postal services, and airports.
Vijay and his colleagues say they focused on three qualities: full autonomy, ease of maneuverability, and accessibility.
“The robot is not replacing any staff members; it’s aiding them in their duties,” Vijay says. “It’s rewarding seeing staff members at our pilot locations so happy about having the robot helping them do their tasks. It’s also very rewarding seeing people take their delivery order from the Ottobot.”
Focusing on autonomous technology

For 15 years Vijay, an IEEE senior member, worked on autonomous robots and vehicles at companies including HCL Technologies, Tata Consultancy Services, and THRSL. In 2019 he joined Aptiv, an automotive technology supplier headquartered in Dublin. There he worked on BMW’s urban mobility project, which is developing autonomous transportation and traffic-control systems.
During Vijay’s time there, he noticed that Aptiv and its competitors were focusing more on developing electric cars rather than autonomous ones. He figured it was going to take a long time for autonomous cars to become mainstream, so he began to look for niche applications. He hit upon restaurants and other businesses that were struggling to keep up with deliveries.
Ottobot reduces delivery costs by up to 70 percent, Vijay says, and it can cut carbon emissions for short-distance deliveries by almost 40 percent.
Using wheelchair technology, the Ottobot can maneuver over curbs and other obstacles.
Ottobot as an airport assistant

Within the first few months of the startup’s launch, Vijay and the Ottonomy team began working with Cincinnati/Northern Kentucky Airport. The facility wanted to give passengers the option of having food from the airport’s restaurants and convenience stores delivered to their gate, but it couldn’t find an autonomous robot that could navigate the crowded facility without GPS access, Vijay says.
To substitute for GPS, the robot uses 3D lidars, cameras, and ultrasonic sensors. The lidars provide geometric information about the environment, the cameras collect semantic and depth data, and the short-range ultrasonic sensors ensure that the Ottobot detects poles and other obstructions. The Ottonomy team wrote its own software to enable the robot to create high-information maps—a 3D digital twin of the facility.
Vijay says there’s a safety mechanism in place that lets a staff member “take over the controls if the robot can’t decide how to maneuver on its own, such as through a crowd.” The safety mechanism also notifies an Ottonomy engineer if the robot’s battery runs low on power, Vijay says.
“Imagine passengers are boarding their plane at a gate,” he says. “Those areas get very crowded. During the robot’s development process, one of our engineers joked around, saying that the only way to navigate a crowd of this size was to move sideways. We laughed at it then, but three weeks later we started developing a way for the robot to walk sideways.”
The team took its inspiration from electric-powered wheelchairs. All four of the Ottobot’s wheels are powered and can steer simultaneously—which allows it to move laterally, swerve, and take zero-radius turns.
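The geometry behind this kind of four-wheel independent steering (often called a swerve drive) is straightforward to sketch: each wheel must point along, and spin at, the velocity of its own contact point. The snippet below is a generic textbook model with hypothetical wheel positions, not Ottonomy's actual control code.

```python
import math

# Illustrative swerve-drive kinematics: given a desired body velocity
# (vx, vy) and rotation rate omega, compute the steering angle and wheel
# speed for each of four independently steered wheels.

# Wheel positions relative to the robot center, in meters (hypothetical layout).
WHEELS = {
    "front_left":  ( 0.25,  0.20),
    "front_right": ( 0.25, -0.20),
    "rear_left":   (-0.25,  0.20),
    "rear_right":  (-0.25, -0.20),
}

def swerve_setpoints(vx, vy, omega):
    """Return {wheel: (steer_angle_rad, speed_m_s)} for a rigid body
    translating at (vx, vy) m/s while rotating at omega rad/s."""
    setpoints = {}
    for name, (x, y) in WHEELS.items():
        # Velocity of the wheel contact point = body velocity + omega x r
        wx = vx - omega * y
        wy = vy + omega * x
        setpoints[name] = (math.atan2(wy, wx), math.hypot(wx, wy))
    return setpoints

# Pure lateral ("sideways") motion: every wheel steers to 90 degrees.
lateral = swerve_setpoints(0.0, 0.5, 0.0)
# Zero-radius turn: wheels align tangent to circles around the center.
spin = swerve_setpoints(0.0, 0.0, 1.0)
```

With all four wheels steerable, lateral motion and zero-radius turns fall out of the same equation; a differential-drive base would need to stop and pivot instead.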
The wheelchair technology also allows the Ottobot to maneuver outside an airport setting. The wheels can carry the robot over sidewalk curbs and other obstacles.
“It’s rewarding seeing staff members at our pilot locations so happy about having the robot helping them do their tasks.”
Ottobot is 1.5 meters tall—enough to make it visible. It can adjust its position and height so that its cargo can be reached by children, the elderly, and people with disabilities, Vijay says.
The robot’s compartments can hold products of different sizes, and they are large enough to allow it to make multiple deliveries in a single run.
To place orders, customers scan a QR code at the entrance of a business or at their gate to access Crave, a food ordering and delivery mobile app. After placing their order, customers provide their location. In an airport, the location would be the gate number. The customers then are sent a QR code that matches them to their order.
A store or restaurant employee loads the ordered items into Ottobot. The robot’s location and estimated arrival time are updated continuously on the app.
Delivery time and pricing vary by location, but retail orders can be delivered in as little as 10 minutes, while the delivery time for restaurant orders generally ranges from 20 to 25 minutes, Vijay says.
Once the robot reaches its final destination, it sends an alert to the customer’s phone. The Ottobot then scans the person’s QR code, which unlocks the compartment.
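The ordering flow described above amounts to a small state machine keyed on a per-order QR token. Here is a minimal sketch of that logic; every class and field name is hypothetical, not Ottonomy's or Crave's actual API.

```python
import secrets
from dataclasses import dataclass, field

@dataclass
class Order:
    """Toy model of one delivery: placed -> loaded -> en_route -> delivered."""
    items: list
    location: str            # e.g. an airport gate number
    qr_token: str = field(default_factory=lambda: secrets.token_hex(8))
    status: str = "placed"

    def load(self):
        self.status = "loaded"

    def dispatch(self):
        self.status = "en_route"

    def unlock(self, scanned_token: str) -> bool:
        # Open the compartment only while en route and only for the
        # QR token that was issued to this customer at order time.
        if self.status == "en_route" and scanned_token == self.qr_token:
            self.status = "delivered"
            return True
        return False

order = Order(items=["sandwich", "coffee"], location="Gate B7")
order.load()
order.dispatch()
assert not order.unlock("wrong-token")   # someone else's code is rejected
assert order.unlock(order.qr_token)      # the matching code opens the bin
```

Matching the unlock step to a token issued at order time is what lets the robot sit unattended in a crowded terminal without handing cargo to the wrong person.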
Pilot programs are being run with Rome Airport and Posten, a Norwegian postal and logistics group.
Ottonomy says it expects Ottobot to be used at airports, college campuses, restaurants, and retailers next year in Europe and North America.
Why IEEE membership is vital

Being an IEEE member has given Vijay the opportunity to interact with other practicing engineers, he says. He attends conferences frequently and participates in online events.
“When my team and I were facing difficulties during the development of the Ottonomy robot,” he says, “I was able to reach out to the IEEE members I’m connected with for help.”
Access to IEEE publications such as IEEE Robotics and Automation Magazine, IEEE Robotics and Automation Letters, and IEEE Transactions on Automation Science and Engineering has been vital to his success, he says. His team referred to the journals throughout the Ottobot’s development and cited them in their technical papers and when completing their patent applications.
“Being an IEEE member, for me, is a no-brainer,” Vijay says.
The production of large components currently requires cost-intensive special machine tools with large workspaces. The corresponding process chains are usually sequential and hard to scale. Furthermore, large components are usually manufactured in small batches; consequently, the planning effort has a significant share in the manufacturing costs. This paper presents a novel approach for manufacturing large components by industrial robots and machine tools through segmented manufacturing. This leads to a decoupling of component size and necessary workspace and enables a new type of flexible and scalable manufacturing system. The presented solution is based on the automatic segmentation of the CAD model of the component into segments, which are provided with predefined connection elements. The proposed segmentation strategy divides the part into segments whose structural design is adapted to the capabilities (workspace, axis configuration, etc.) of the field components available on the shopfloor. The capabilities are provided by specific information models containing a self-description. The process planning step of each segment is automated by utilizing the similarity of the segments and the self-description of the corresponding field component. The result is a transformation of a batch size one production into an automated quasi-serial production of the segments. To generate the final component geometry, the individual segments are mounted and joined by robot-guided Direct Energy Deposition. The final surface finish is achieved by post-processing using a mobile machine tool coupled to the component. The entire approach is demonstrated along the process chain for manufacturing a forming tool.
In the late 1980s, Rod Brooks and Anita Flynn published a paper in The Journal of the British Interplanetary Society with the amazing title of Fast, Cheap, and Out of Control: A Robotic Invasion of the Solar System. The paper explored the idea that instead of sending one big and complicated and extremely expensive robot to explore (say) the surface of Mars, you could instead send a whole bunch of little and simple and extremely cheap robots, while still accomplishing mission goals. The abstract of the paper concludes: “We suggest that within a few years it will be possible at modest cost to invade a planet with millions of tiny robots.”
That was 1989; we’re still nowhere near millions of tiny robots. Some things are just really hard to scale down, and building robots that are the size of bees or flies or even gnats requires advances in (among other things) sensing for autonomy as well as appropriate power systems. But progress is being made, and Sawyer Fuller, assistant professor at the University of Washington (who knows a thing or four about insect-scale flying robots), has a new article in Science Robotics that shows how it’s possible to put together the necessary sensing hardware to enable stable, autonomous flight for flying robots smaller than a grain of rice.
For a tiny flying robot to be autonomous (or for any flying robot to be autonomous, really) it needs to be able to maintain its own stability, using sensors to keep track of where it is and make sure that it doesn’t go anywhere that it doesn’t want to go. This is especially tricky for small-scale flying robots, because they can be pushed around by air currents or turbulence that larger robots can simply ignore. But it turns out that being tiny also has some advantages: Because the drag of the air itself becomes more dominant the smaller an aircraft gets, an onboard gyroscope becomes irrelevant, and you just need an accelerometer. Tie that to an optic flow camera to track motion, along with a microcontroller to do the computation, and you have everything you need.
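One common way to fuse a rate signal with an absolute tilt reference is a complementary filter: trust the integrated rotation rate (here derived from optic flow) at short timescales, and let the accelerometer’s tilt estimate anchor the result over long timescales. The sketch below is a generic textbook filter offered as an illustration of the idea, not the estimator from Fuller’s paper; the sample rate and blending constant are assumed values.

```python
def complementary_filter(flow_rates, accel_tilts, dt=0.002, alpha=0.98):
    """Fuse rotation rate from optic flow (rad/s) with tilt inferred from
    the accelerometer (rad) into a single attitude estimate (rad).

    alpha near 1 trusts the integrated rate over short timescales, while
    the (1 - alpha) accelerometer term corrects slow drift. A generic
    textbook filter, not the estimator from the Science Robotics paper.
    """
    estimate = accel_tilts[0]  # initialize from the absolute reference
    estimates = []
    for rate, tilt in zip(flow_rates, accel_tilts):
        estimate = alpha * (estimate + rate * dt) + (1 - alpha) * tilt
        estimates.append(estimate)
    return estimates
```

With zero rotation rate the estimate settles on the accelerometer’s tilt reading; with alpha at 1.0 it reduces to pure rate integration, which is the short-timescale behavior the filter relies on.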
Sawyer B. Fuller
The camera in the picture above is, somewhat incredibly, available off-the-shelf. It’s designed primarily to explore your insides, which is why the entire camera is only 0.65 millimeters tall and wide, 1.2 mm long, and weighs 1 milligram (including its multi-element lens). The sensor on this particular camera exceeds the power budget that the researchers are targeting, probably because its intended use case does not involve tiny robots with tinier batteries, but there are existing sensors of a similar size that would work.
In total, this hardware weighs 6.2 mg and uses 167 microwatts of power, which in theory could be suitable for a 10 mg flying robot, something about the size of a chonky gnat. Figuring out whether it all actually works in practice isn’t easy, since chonky robotic gnats don’t exist, so the researchers instead used a palm-sized drone running simulated sensors. Testing showed that the system was able to successfully estimate the attitude of the drone and also detect and reject disturbances from wind. In fact, its performance was comparable to an actual fruit fly, which is impressive considering how long the fruit fly has had to refine its design.
“Reducing drone size down to gnat scale only amplifies many of the benefits of insect scale,” Fuller says, “such as greater potential to harvest all needed energy from the environment and larger deployments.” Much like Brooks and Flynn’s vision for swarms of inexpensive robots, Fuller sees the kind of gnat-sized robots that these sensors will help enable as a completely new approach to autonomous exploration. “Small flying robotic insects will revolutionize low-altitude atmospheric ‘air telemetry’—remote sensing of air composition and flow—by doing so on a much more detailed and persistent basis than is possible now. They will power themselves from the sun or indoor lighting—which favors small scale. The chemical sensor might be an insect antenna, which my group demonstrated in the ‘smellicopter.’ Applications include early detection of forest fires, pest onset in agriculture, buried explosives, or mapping hazardous volatiles to find leaks of greenhouse gasses or the spread of airborne diseases.”
And if you find the whole “Fast, Cheap, and Out of Control” thing compelling and want to watch a very strange movie of the same name from 1997 featuring Rod Brooks, a lion tamer, a topiary artist, and a naked mole rat expert, here you go.
Eight years and 14 million views ago, ETH Zurich introduced the Cubli, a robotic cube that can dynamically balance on a single point. It’s magical to watch, but at the same time, fairly straightforward to understand: there are three reaction wheels within the Cubli, one for each axis. And in a vivid demonstration of Newton’s third law, spinning up a reaction wheel exerts a force on the cube in the opposite direction, resulting in precision control over roll, pitch and yaw that allows the Cubli to balance itself, move around, and even jump.
This is very cool, but obviously, controlling the Cubli in three axes requires three reaction wheels. If you took out a reaction wheel, one of the Cubli’s axes would just do whatever it wanted, and if you took out two reaction wheels, then surely it would topple over, right?
Right…?
Figuring that an appropriate number of actuated degrees of freedom for a self-balancing cube was somehow too easy, researchers from ETH Zurich (Matthias Hofer, Michael Muehlebach, and Raffaello D’Andrea) decided to build a One-Wheel Cubli, which manages to balance on a point just like the original Cubli, except with only one single reaction wheel. Whoa.
The One-Wheel Cubli (OWC) uses its single reaction wheel to control itself in both pitch and roll. The yaw degree of freedom is uncontrolled, meaning that the OWC can spin around on its pivot point, although thanks to friction, it doesn’t. Having more degrees of freedom than actuators (in this case, reaction wheels) means that the OWC is what’s called underactuated. But obviously, full control over two very separate axes is required to pull off this balancing act. So how does it work?
Designer Matthias Hofer explains that you can think of the One-Wheel Cubli’s balancing like trying to balance both a pen and a broomstick vertically on your palm, if you also imagine that you only have to worry about balancing them on one axis—as in, they’ll only tip towards you or away from you, and you can move your palm underneath them to compensate. The pen, shorter, will be harder to balance and require small, rapid movements of your palm. Meanwhile, the longer broomstick is much easier to balance, and you can do so with slower movements. This is essentially the working principle of the OWC: you may only have one control input to work with, but the small fast movements and the large slow movements are decoupled enough that one actuator can manage them both independently, by making the small fast movements within the large slow movements. And this, incidentally, is the reason for that long beam with the weights on the end that differentiates the One-Wheel Cubli from the original Cubli: it’s there to maximize that difference in inertia between the two axes you’re trying to independently control.
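The pen-and-broomstick intuition can be made concrete with a toy linear model: two inverted-pendulum modes driven by the same single input are jointly controllable, in the formal sense, as long as their natural frequencies differ. The numbers and the simplified dynamics below are illustrative assumptions, not the OWC’s actual parameters or model.

```python
import numpy as np

def controllability_rank(w1, w2, b1=1.0, b2=1.0):
    """Two linearized inverted-pendulum modes sharing one input u:
    theta_i'' = w_i**2 * theta_i + b_i * u.
    Returns the rank of the Kalman controllability matrix; rank 4 means
    one input can independently steer both axes."""
    A = np.array([[0.0,    1.0, 0.0,    0.0],
                  [w1**2,  0.0, 0.0,    0.0],
                  [0.0,    0.0, 0.0,    1.0],
                  [0.0,    0.0, w2**2,  0.0]])
    B = np.array([[0.0], [b1], [0.0], [b2]])
    # Kalman controllability matrix [B, AB, A^2 B, A^3 B]
    C = np.hstack([B, A @ B, A @ A @ B, A @ A @ A @ B])
    return int(np.linalg.matrix_rank(C))

# A fast "pen" mode and a slow "broomstick" mode: fully controllable.
print(controllability_rank(8.0, 2.0))  # prints 4
# Identical frequencies: the two modes look the same to the input,
# and independent control is lost.
print(controllability_rank(4.0, 4.0))  # prints 2
```

This is exactly why the weighted beam matters: by pushing the two axes’ inertias (and hence natural frequencies) apart, it keeps the single actuator’s influence on the two modes distinguishable.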
“Seeing the OWC balance for the first time was counter-intuitive as the working principle is not obvious,” Hofer told IEEE Spectrum. “It was very satisfying for us, as it meant that every puzzle piece of the project that Michael Muehlebach, Raffaello D’Andrea, and I, along with our technical staff (Michael Egli and Matthias Müller), contributed to finally worked—including the theoretical analysis, the prototype development, the modeling, the state estimation, and the control design.”
All those puzzle pieces took a long time to fit together, and required years of work to get from something that would theoretically work on paper to an actual working system. After the failure of a couple of early hardware iterations, the researchers put some extra effort into a much more detailed modeling approach, which they then leveraged into the control system that was ultimately successful. One of the most important tricks, it turned out, was to carefully model exactly how the beam with the weights on the ends of it deflects. The deflection isn’t much, but it’s enough to screw everything up if you’re not careful. And as you can see in the video, the control system is successful enough that despite the underactuated nature of the OWC, it’s even able to compensate for some gentle nudging.
The One-Wheel Cubli is more than just an abstract hardware and software project. There are potential useful applications here, one of which is attitude control of satellites. Many satellites already use reaction wheels to keep them pointing in the right direction, and these reaction wheels are so critical to a satellite’s functionality that spares are typically included, which adds mass and complexity. For satellites that have long structures (like instrument booms) that provide different mass moments of inertia along different axes, the OWC’s control technique could provide an additional means of redundancy in case of multiple failures of reaction wheels.
We asked Hofer about what he might like to work on next, and it sounds like taming that uncontrolled yaw axis is a potential way to go. “An interesting extension would be to also control the yaw degree of freedom,” Hofer says. “If the reaction wheel is not mounted orthogonally to the yaw direction, it would affect both tilt angles plus the yaw direction. If all three degrees of freedom have different mass moments of inertia, the current working principle of the OWC could possibly be extended to all three degrees of freedom.”
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today’s videos!
Liquid metal and hydrogel combine to make a soft, inflatable actuator that runs entirely on electricity without relying on external pumps.
Happy 10th anniversary to Jamie Paik’s Reconfigurable Robotics Lab at EPFL!
[ RRL ]
The manufacturing industry (largely) welcomed artificial intelligence with open arms. Less of the dull, dirty, and dangerous? Say no more. Planning for mechanical assemblies still requires more than scratching out some sketches, of course: it’s a complex conundrum that means dealing with arbitrary 3D shapes and highly constrained motion required for real-world assemblies. In a quest to ease some of said burdens, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Autodesk Research, and Texas A&M University came up with a method to automatically assemble products that’s accurate, efficient and generalizable to a wide range of complex real-world assemblies. Their algorithm efficiently determines the order for multi-part assembly, and then searches for a physically realistic motion path for each step.
[ MIT CSAIL ]
Thanks, Rachel!
Xenoforms is an installation work that consists of 3D prints of parametric models, video animations and visualizations, posters with technical diagrams, and a 6-axis 3D printer. In this work, a series of three-dimensional forms have been automatically generated by an artificial system that attempts to identify design decisions for an efficient, sustainable, and durable structure. The work provides a speculative scenario that demonstrates how an autonomous A.I. system follows its own strategies for colonizing the architectural space and as an extension to become a human symbiont.

Xenoforms is a collaboration between Flexiv (and its Rizon arm) and artist Stavros Didakis.
[ Sonicon Lab ] via [ Flexiv ]
Thanks, Noah!
The latest buzz at the University of Maryland? Tiny, autonomous drones that harness the power of artificial intelligence to work together. In this case, the minute robots could one day provide backup to pollinators like honey bees, potentially securing the world’s food crops as these critical insect species face challenges from fungal disease, pesticides and climate change. The project is led by doctoral student Chahat Deep Singh M.E. ’18 of the Perception and Robotics Group, led by Professor Yiannis Aloimonos and Research Scientist Cornelia Fermüller.

[ UMD ]
iRobot has a museum, which is a lot more interesting than you might think, because iRobot spent a very long time making things that are really, really not vacuums. And make sure to look closely at some of the earliest robots, because they in fact predate iRobot itself.
Some of those robots still have “IS Robotics” branding on them, which was the name of the company that Rod Brooks (and his students Colin Angle and Helen Greiner) founded in 1990. It wasn’t called “iRobot” until 2000. IT, in particular, was still part of Brooks’ lab at MIT in the mid-1990s, and was featured on a 1996 episode of “Scientific American Frontiers” which I just found on YouTube. There’s also a clip of Marc Raibert from 1987!
And just a little more of the best stuff from the museum:
[ iRobot ]
The ANYexo 2.0 is our latest prototype based on around two decades of research at the Sensory-Motor Systems Lab and Robotic Systems Lab of ETH Zürich. This video shows uncommented impressions of the main features of ANYexo 2.0 and its performance in range of motion, speed, strength, haptic transparency, and human-robot attachment system.

[ ETH Zurich ]
Here are four of the finalists of this year’s KUKA Innovation Award.
[ KUKA ]
How soft should a robot foot be, anyway?
[ GVLab ]
At ANYbotics, we constantly release exciting new software features and payloads to our customers. The December 2022 update introduces major product developments that make it easier for operators to operate ANYmal, monitor gas leakages, perform high-precision reality capture, attain more insight from thermal measurements, and cover wider areas through new mobility features.

[ ANYbotics ]
Take a tour through our new ABB Robotics mega factory in Shanghai, China and see how we’re bringing the physical and digital worlds together for faster, more resilient and more efficient manufacturing and research.

[ ABB ]
On December 1, 2022, UMD alum Zhen Zeng of JP Morgan AI Research talked to robotics students as a speaker in the Undergraduate Robotics Pathways & Careers Speaker Series, which aims to answer the question: “What can I do with a robotics degree?”

[ UMich ]
This talk is from Nitin Sanket at WPI, on “AI-Powered Robotic Bees: A Journey Into The Mind And Body!”
The human fascination to mimic ultra-efficient living beings like insects and birds has led to the rise of small autonomous robots. Smaller robots are safer, more agile, and are task-distributable as swarms. One might wonder, why do we not have small robots deployed in the wild today? I will present how the world’s first prototype of a RoboBeeHive was built using this philosophy. Finally, I will conclude with a recent theory called Novel Perception that utilizes the statistics of motion fields to tackle various classes of problems from navigation and interaction. This method has the potential to be the go-to mathematical formulation for tackling the class of motion-field-based problems in robotics.

[ UPenn ]
When Xiaomi announced its CyberOne humanoid robot a couple of months back, it wasn’t entirely clear what the company was actually going to do with the robot. Our guess was that rather than pretending that CyberOne was going to have some sort of practical purpose, Xiaomi would use it as a way of exploring possibilities with technology that may have useful applications elsewhere, but there were no explicit suggestions that there would be any actual research to come out of it. In a nice surprise, Xiaomi roboticists have taught the robot to do something that is, if not exactly useful, at least loud: to play the drums.
The input for this performance is a MIDI file, which the robot is able to parse into drum beats. It then generates song-length sequences of coordinated whole-body trajectories which are synchronized to the music, which is tricky because the end effectors have to make sure to actuate the drums exactly on the beat. CyberOne does a pretty decent job even when it’s going back and forth across the drum kit. This is perhaps not super cutting-edge humanoid research, but it’s still interesting to see what a company like Xiaomi has been up to. And to that end, we asked Zeyu Ren, a senior hardware engineer at the Xiaomi Robotics Lab, to answer a couple of questions for us.
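Xiaomi hasn’t published its pipeline, but the parsing-and-assignment step can be sketched in miniature: take a time-ordered list of drum hits and greedily assign each to a limb that can reach that drum and has had time to travel since its last hit. The drum names, the limb reach map, and the travel-time constant below are all illustrative assumptions, not CyberOne’s actual implementation.

```python
# Toy sketch: assign (time, drum) beat events to robot limbs, respecting
# which limb can reach which drum and a minimum travel time between
# consecutive hits by the same limb. All constants are assumed, not
# taken from Xiaomi's system.

REACH = {
    "left_hand": {"snare", "hi_hat"},
    "right_hand": {"snare", "ride", "tom"},
    "right_foot": {"kick"},
}
MIN_TRAVEL = 0.15  # seconds a limb needs between two hits (assumed)

def assign_limbs(events):
    """events: time-sorted list of (time_s, drum).
    Returns {limb: [(time_s, drum), ...]}."""
    schedule = {limb: [] for limb in REACH}
    last_hit = {limb: float("-inf") for limb in REACH}
    for t, drum in events:
        # Capable limbs that have had enough time to travel to the drum.
        candidates = [limb for limb in REACH
                      if drum in REACH[limb] and t - last_hit[limb] >= MIN_TRAVEL]
        if not candidates:
            continue  # beat dropped; a real planner would re-optimize
        # Greedy choice: the limb that has been idle the longest.
        limb = min(candidates, key=lambda l: last_hit[l])
        schedule[limb].append((t, drum))
        last_hit[limb] = t
    return schedule

beats = [(0.0, "kick"), (0.0, "hi_hat"), (0.5, "snare"),
         (0.5, "kick"), (0.55, "snare")]
print(assign_limbs(beats))
```

A real system would then feed each limb’s schedule into whole-body trajectory optimization, which is where the collision-free and on-the-beat constraints Ren describes actually get enforced.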
IEEE Spectrum: So why is Xiaomi working on a humanoid robot, anyway?
Zeyu Ren: There are three reasons why Xiaomi is working on humanoid robots. The first reason is that we are seeing a huge decline in the labor force in China, and the world. We are working on replacing the human labor force with humanoid robots even though there is a long way to go. The second reason is that we believe humanoid robots are the most technically challenging of all robot forms. By working on humanoid robots, we can also use this technology to solve problems on other robot forms, such as quadruped robots, robotic arms, and even wheeled robots. The third reason is that Xiaomi wants to be the most technically advanced company in China, and humanoid robots are sexy.
Why did you choose drumming to demonstrate your research?
Ren: After the official release of Xiaomi CyberOne on August 11, we got a lot of feedback from the public who didn’t have a background in robotics. They are more interested in seeing humanoid robots doing things that humans cannot easily do. Honestly speaking, it’s pretty difficult to find such scenarios, since we know that the first prototype of CyberOne is far behind humans.
But one day, one of our engineers who had just begun to play drums suggested that drumming may be an exception. She thought that compared to rookie drummers, humanoid robots have more advantages in hand-foot coordinated motion and rhythmic control. We all thought it was a good idea, and drumming itself is super cool and interesting. So we chose drumming to demonstrate our research.
What was the most challenging part of this research?
Ren: The most challenging part of this research was that when receiving the long sequences of drum beats, CyberOne needs to assign sequences to each arm and leg and generate continuous collision-free whole-body trajectories within the hardware constraints. So, we extract the basic beats and build our drum beat motion trajectory library offline by optimization. Then, CyberOne can generate continuous trajectories consistent with any drum score. This approach gives CyberOne more freedom in playing the drums, and is limited only by the robot’s capabilities.
What different things do you hope that this research will help your robot do in the future?
Ren: Drumming requires CyberOne to coordinate whole-body motions to achieve a fast, accurate, and large range of movement. We first want to find the limit of our robot in terms of hardware and software to provide a reference for the next-generation design. Also, through this research, we have formed a complete set of automatic drumming methods for robots to perform different songs, and this experience also helps us to more quickly realize the development of other musical instruments to be played by robots.
What are you working on next?
Ren: We are working on the second generation of CyberOne, and hope to further improve its locomotion and manipulation ability. On the hardware level, we plan to add more degrees of freedom, integrate self-developed dexterous hands, and add more sensors. On the software level, more robust control algorithms for locomotion and vision will be developed.
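Xiaomi has not published its method, but the subproblem Ren describes, assigning a stream of timed drum beats to four limbs under hardware constraints, can be illustrated with a toy greedy scheduler. Everything here (function name, the single `recovery` constraint standing in for the real collision and reachability limits) is an illustrative assumption, not Xiaomi’s actual algorithm:

```python
def assign_beats(beats, limbs, recovery=0.25):
    """Greedily assign (time, drum) beats to limbs, assuming each limb
    needs `recovery` seconds between consecutive hits (a toy stand-in
    for the real hardware constraints). Returns limb -> list of beats."""
    last_hit = {limb: float("-inf") for limb in limbs}
    schedule = {limb: [] for limb in limbs}
    for t, drum in sorted(beats):
        # Pick the limb that has been idle the longest.
        limb = min(last_hit, key=last_hit.get)
        if t - last_hit[limb] < recovery:
            raise ValueError(f"no limb free for beat at t={t:.2f}")
        last_hit[limb] = t
        schedule[limb].append((t, drum))
    return schedule
```

A real system would then stitch pre-optimized per-beat motion primitives (the offline trajectory library Ren mentions) along each limb’s schedule.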
Flapping wing micro aerial vehicles (FWMAVs) are known for their flight agility and maneuverability. These bio-inspired and lightweight flying robots still present limitations in their ability to fly in direct wind and gusts, as their stability is severely compromised in contrast with their biological counterparts. To this end, this work aims at making in-gust flight of flapping wing drones possible using an embodied airflow sensing approach combined with an adaptive control framework at the velocity and position control loops. At first, an extensive experimental campaign is conducted on a real FWMAV to generate a reliable and accurate model of the in-gust flight dynamics, which informs the design of the adaptive position and velocity controllers. With an extended experimental validation, this embodied airflow-sensing approach integrated with the adaptive controller reduces the root-mean-square errors along the wind direction by 25.15% when the drone is subject to frontal wind gusts of alternating speeds up to 2.4 m/s, compared to the case with a standard cascaded PID controller. The proposed sensing and control framework improves flight performance reliably and serves as the basis of future progress in the field of in-gust flight of lightweight FWMAVs.
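The paper’s controller is not reproduced here, but the core idea of augmenting a velocity-loop PID with an airflow-informed term can be sketched with a toy example. The class name, gains, and the linear-drag assumption (`drag ≈ c_drag * v_air`) are illustrative assumptions, not the authors’ actual design:

```python
class GustAdaptivePID:
    """Toy velocity-loop controller: a standard PID plus a feedforward
    term that compensates estimated wind-induced drag, assuming the
    disturbance is roughly proportional to the sensed airflow speed."""

    def __init__(self, kp, ki, kd, c_drag, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.c_drag, self.dt = c_drag, dt
        self.i = 0.0       # integral state
        self.prev_e = 0.0  # previous error, for the derivative term

    def step(self, v_ref, v_meas, v_air):
        e = v_ref - v_meas
        self.i += e * self.dt
        d = (e - self.prev_e) / self.dt
        self.prev_e = e
        feedforward = self.c_drag * v_air  # cancel estimated gust drag
        return self.kp * e + self.ki * self.i + self.kd * d + feedforward
```

With zero tracking error, the output reduces to the gust-compensating feedforward term alone, which is the intuition behind why airflow sensing helps: the controller reacts to the wind before it shows up as velocity error.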
The use of manipulators in space missions has become popular, as their applications extend to various space missions such as on-orbit servicing, assembly, and debris removal. Due to space reachability limitations, such robots must accomplish their tasks in space autonomously and under severe operating conditions such as the occurrence of faults or uncertainties. For robots and manipulators used in space missions, this paper provides a unique, robust control technique based on Model Predictive Path Integral Control (MPPI). The proposed algorithm, named Planner-Estimator MPPI (PE-MPPI), comprises a planner and an estimator. The planner controls the system, while the estimator updates the system parameters in the presence of parameter uncertainties. The performance of the proposed controller is investigated under parameter uncertainties and system component failure in the pre-capture phase of the debris removal mission. Simulation results confirm the superior performance of PE-MPPI against vanilla MPPI.
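The PE-MPPI algorithm itself is not detailed in the abstract, but the vanilla MPPI baseline it is compared against is a well-known sampling-based controller and can be sketched in a few lines. This is a minimal illustration under simplified assumptions (additive Gaussian control noise, a single temperature parameter `lam`), not the paper’s implementation:

```python
import numpy as np

def mppi_step(x0, dynamics, cost, u_nominal, n_samples=256,
              sigma=0.5, lam=1.0, rng=None):
    """One vanilla MPPI update: sample noisy control sequences, roll out
    the dynamics, and reweight the noise by exponentiated trajectory cost."""
    if rng is None:
        rng = np.random.default_rng(0)
    H, m = u_nominal.shape  # horizon length, control dimension
    noise = rng.normal(0.0, sigma, size=(n_samples, H, m))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(H):
            u = u_nominal[t] + noise[k, t]
            x = dynamics(x, u)
            costs[k] += cost(x, u)
    # Softmin weighting: low-cost rollouts dominate the update.
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return u_nominal + np.einsum("k,khm->hm", w, noise)
```

A planner-estimator scheme like PE-MPPI would, on top of this, run a second loop that refines the dynamics model’s parameters from observed state transitions before each planning step.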