Two New Papers: Learning to Fling and Singulate Fabrics
The system for our IROS 2022 paper on singulating layers of cloth with tactile sensing.
In collaboration with my colleagues at Berkeley and CMU, we recently uploaded two papers to arXiv on robotic fabric manipulation:
- Efficiently Learning Single-Arm Fling Motions to Smooth Garments, for ISRR 2022.
- Learning to Singulate Layers of Cloth using Tactile Feedback, for IROS 2022.
Robotic fabric (or cloth) manipulation is a recurring theme in my research, and these two papers continue the trend. The first paper, which we started back in Spring 2021, is about dynamic fabric manipulation; it can be thought of as an extension of our earlier ICRA papers on “Robots of the Lost Arc” and “Planar Robot Casting,” while incorporating ideas from Huy Ha and Shuran Song’s legendary FlingBot paper. While FlingBot uses two arms, many robots have only one arm, and we show how to parameterize the action space in a way that makes the search space tractable for learning. It was really fun to work with this “DMODO” team (Dynamic Manipulation of Deformable Objects) over the last few years, and I hope to continue doing so.
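To give a rough sense of what a low-dimensional single-arm fling parameterization could look like, here is a minimal Python sketch. The specific parameters (grasp point, lift height, fling distance, fling speed) and their ranges are my illustrative assumptions, not the exact parameterization from the paper; the point is only that a small, bounded action space makes even simple search or learning feasible.

```python
from dataclasses import dataclass
import random

@dataclass
class FlingAction:
    """Hypothetical low-dimensional fling parameterization (illustrative only)."""
    grasp_x: float         # grasp location along the garment edge, normalized to [0, 1]
    lift_height: float     # height to lift the garment before the fling (meters)
    fling_distance: float  # horizontal end-effector travel during the fling (meters)
    fling_speed: float     # peak end-effector speed during the fling (m/s)

def sample_action() -> FlingAction:
    """Sample an action uniformly from assumed (bounded) parameter ranges."""
    return FlingAction(
        grasp_x=random.uniform(0.0, 1.0),
        lift_height=random.uniform(0.3, 0.7),
        fling_distance=random.uniform(0.2, 0.6),
        fling_speed=random.uniform(0.5, 2.0),
    )

def coverage_after_fling(action: FlingAction) -> float:
    """Placeholder: a real system would execute the fling on the robot and
    measure garment coverage from an overhead image. Returns a dummy value here."""
    return random.random()

# With only a few bounded parameters, even naive search over sampled actions
# (keeping the one with the best measured coverage) is tractable.
best_action = max((sample_action() for _ in range(50)), key=coverage_after_fling)
```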
I am also very excited about the second paper. This is my first paper developed entirely at CMU, and it’s also my first that incorporates tactile sensing. When I first pitched project ideas to my postdoc host while he was interviewing me last year, I suggested using tactile sensing for fabric manipulation to give a robot local information that it might not get from vision (e.g., due to occlusions), so I’m really happy that we got this system working.
Specifically, we focus on multi-layer fabric manipulation, which occurs all the time when folding and unfolding fabrics such as clothing. Grasping an incorrect number of fabric layers has been a recurring failure in our prior work on fabric smoothing and folding. As is typical for me, many of my research ideas arise from thinking about how to address existing failure cases. After initially trying the GelSight sensor (used by many at CMU), we ended up using the ReSkin sensor (a CMU research product … anyone seeing a trend?), whose small form factor allows the robot to singulate and separate layers. While the machine learning in this paper is simpler than in my other papers, I’m OK with that if it’s the approach that worked best out of what we tried. In my view, there’s no need to force more complex and elegant algorithms for their own sake if they are not the right tool for the problem.
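As a rough illustration of the kind of lightweight learning involved, here is a minimal sketch of a classifier that maps a ReSkin reading to a predicted number of grasped layers. I am assuming a 15-dimensional magnetometer input (5 magnetometers × 3 axes) and three classes (0, 1, or 2 layers); the network sizes and the overall setup are illustrative, not the paper’s exact architecture or training procedure.

```python
import torch
import torch.nn as nn

class LayerClassifier(nn.Module):
    """Tiny MLP mapping a tactile reading to a grasped-layer count (illustrative)."""
    def __init__(self, in_dim: int = 15, n_classes: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = LayerClassifier()
reading = torch.randn(1, 15)                 # stand-in for a real ReSkin magnetometer reading
predicted_layers = model(reading).argmax(1)  # untrained here; a real model needs labeled data
```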
Incidentally, neither of these two papers uses a fabric simulator. I invested an enormous amount of time trying to get one working for the tactile sensing paper, but it didn’t work out. I’m really thankful, therefore, that my wonderful colleagues Sashank Tirumala and Thomas Weng resolved a lot of the hardware details in time for the paper.
These papers have been accepted to ISRR 2022 and IROS 2022 for presentation later this year. The conferences are at really interesting locations: Geneva, Switzerland for ISRR and Kyoto, Japan for IROS. Given its location, and the offensive, unprovoked wars going on in parts of the world, I hope ISRR will include some information about the Geneva Conventions as part of the conference experience. If you’re curious, the last ISRR was in 2019 just before COVID in Hanoi, Vietnam. I attended that conference to present a paper and blogged about it daily, a habit I am less likely to keep up these days due to limited bandwidth. (The main “social” aspect of ISRR 2019 was a tour of Halong Bay.)
I hope those who are going to the conferences have the chance to discuss these papers with my colleagues. For various reasons, I am not planning to attend either conference in person. At least the papers themselves are now on arXiv and fit for research consumption. Let us know if you have any questions.