Fine manipulation involves making precise movements with robotic hands and fingers to handle small objects or perform intricate tasks, such as threading a needle. Dexterous manipulation goes further, requiring highly skilled, accurate, and versatile handling of objects through complex interactions among multiple fingers and joints. Achieving fine and dexterous manipulation with high speed and accuracy is becoming increasingly important in robotics research, yet it poses many challenges: frequent making and breaking of contact, real-time feedback control from high-dimensional observations, high-dimensional control spaces, and objects resting in unstable configurations. Traditional methods rely on precise robot and environment models but often struggle with real-world uncertainties and generalize poorly. Despite decades of research, most demonstrations of dexterous manipulation still rely heavily on teleoperation. Achieving robust and generalizable dexterous manipulation requires advances in perception integration, data collection, and control. Progress in robot learning, including machine learning and transfer learning, offers promising pathways to enhance robotic performance in fine and dexterous manipulation tasks. This workshop seeks to convene researchers from diverse disciplines to share insights on pushing this critical frontier.
This workshop aims to bring together junior and senior researchers to discuss the latest advancements, challenges, and future directions in learning-based approaches for robot fine manipulation skills, one of the most challenging areas in robotics. We will delve into the current state-of-the-art across relevant areas, including the hardware and mechanical design of dexterous manipulators, generalizable skill learning techniques, and sensing modalities such as tactile sensors and vision systems. Researchers will have opportunities to present posters, give contributed talks, and engage in thought-provoking discussions.
We will explore focused research questions in areas such as visual perception.

The expected attendees for these discussions and talks are researchers from academia and industry specializing in robotics and machine learning, particularly those interested in applying learning approaches to achieve fine and dexterous manipulation in robots.
Time (GMT+1, Germany Time Zone) | Event |
---|---|
08:30 - 08:35 | Introduction and Opening Remarks |
08:35 - 08:55 | Invited Talk (Oliver Brock): Dexterous Manipulation: Why Learn It if You Can Just Do It? |
08:55 - 09:15 | Invited Talk (Stephane Doncieux): Learning to grasp: from dataset generation to application on real robots. |
09:15 - 09:35 | Invited Talk (João Silvério): Adaptive Learning and Assistance for Real-World Robotic Manipulation. |
09:35 - 10:00 | Spotlight Presentations |
10:00 - 10:50 | Coffee Break and Poster Session |
10:50 - 11:10 | Invited Talk (Xiaolong Wang): Dexterous Manipulation with Real and Sim Data |
11:10 - 11:30 | Invited Talk (Pulkit Agrawal): Towards Reliable and Dexterous Manipulation |
11:30 - 11:50 | Invited Talk (Berthold Bäuml): Autonomously Learning AI: Towards Human-Level Manipulation with Dextrous Hands. |
11:50 - 13:45 | Lunch Break |
13:50 - 14:10 | Invited Talk (Tucker Hermans): What's the right recipe for learning multi-fingered manipulation? |
14:10 - 14:30 | Invited Talk (Roberto Calandra): The Importance of Touch Sensing for Dexterous Manipulation. |
14:30 - 14:50 | Invited Talk (Animesh Garg): Facets of Dexterity: Simulation, RL, and Imitation. |
14:50 - 15:15 | Spotlight Presentations |
15:15 - 16:05 | Coffee Break and Poster Session |
16:05 - 16:25 | Invited Talk (Li Yi): Learning Dexterous Manipulation from Human-Object Interaction. |
16:25 - 16:45 | Invited Talk (Renaud Detry): Assistive Manipulation: Reducing the Control Effort with Multi-Modal Intent Prediction. |
16:45 - 17:05 | Invited Talk (Yasemin Bekiroglu): Data-efficient learning from vision and touch for manipulation tasks. |
17:05 - 17:50 | Panel Discussion: Animesh Garg, Berthold Bäuml, Yasemin Bekiroglu, Roberto Calandra, Xiaolong Wang, Pulkit Agrawal, Oliver Brock |
17:50 - 18:00 | Closing Remarks and Award |
Paper Title | Authors
---|---
[1] Bridging the Human to Robot Dexterity Gap through Object-Oriented Rewards | Irmak Guzey, Yinlong Dai, Georgy Savva, Raunaq Bhirangi, Lerrel Pinto |
[2] Representing Positional Information in Generative World Models for Object Manipulation | Stefano Ferraro, Pietro Mazzaglia, Tim Verbelen, Bart Dhoedt, Sai Rajeswar |
[3] EgoMimic: Scaling Imitation Learning via Egocentric Video | Simar Kareer, Dhruv Patel, Ryan Punamiya, Pranay Mathur, Shuo Cheng, Chen Wang, Judy Hoffman, Danfei Xu |
[4] Human-Agent Joint Learning for Efficient Robot Manipulation Skill Acquisition | Shengcheng Luo, Quanquan Peng, Jun Lv, Kaiwen Hong, Katherine Rose Driggs-Campbell, Cewu Lu, Yong-Lu Li |
[5] The Role of Action Abstractions in Robot Manipulation Learning and Sim-to-Real Transfer | Elie Aljalbout, Felix Frank, Maximilian Karl, Patrick van der Smagt |
[6] DynaMo: In-Domain Dynamics Pretraining for Visuo-Motor Control | Zichen Jeff Cui, Hengkai Pan, Aadhithya Iyer, Siddhant Haldar, Lerrel Pinto |
[7] Design of an Affordable, Fully-Actuated Biomimetic Hand for Dexterous Teleoperation | Zhaoliang Wan, Zida Zhou, Zehui Yang, Hao Ding, Senlin Yi, Zetong Bi, Hui Cheng
[8] AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real Touch | Max Yang, Chenghua Lu, Alex Church, Yijiong Lin, Christopher J. Ford, Haoran Li, Efi Psomopoulou, David A.W. Barton, Nathan F. Lepora
[9] DexiTac: Soft Dexterous Tactile Gripping | Chenghua Lu, Kailuan Tang, Max Yang, Haoran Li, Tianqi Yue, Nathan F. Lepora
[10] Decomposing the Configuration of an Articulated Object via Graph Neural Network | Seunghyeon Lim, Kisung Shin, Nakul Gopalan, Jun Ki Lee, Byoung-Tak Zhang |
[11] Learning to Accurately Throw Paper Planes | Marcus Kornmann, Qimeng He, Alap Kshirsagar, Kai Ploeger, Jan Peters |
[12] Diff-HySAC: Diffusion-Based Hybrid Soft Actor-Critic for 6D Non-Prehensile Manipulation | Huy Le, Miroslav Gabriel, Tai Hoang, Gerhard Neumann, Vien Anh Ngo |
[13] MILES: Making Imitation Learning Easy with Self-Supervision | Georgios Papagiannis, Edward Johns |
[14] TacEx: GelSight Tactile Simulation in Isaac Sim – Combining Soft-Body and Visuotactile Simulators | Duc Huy Nguyen, Guillaume Duret, Tim Schneider, Alap Kshirsagar, Boris Belousov, Jan Peters |
[15] DROP: Dexterous Reorientation via Online Planning | Albert H. Li, Preston Culbertson, Vince Kurtz, Aaron Ames |
[16] ATK: Automatic Task-driven Keypoint selection for Policy Transfer from Simulation to Real World | Yunchu Zhang, Zhengyu Zhang, Liyiming Ke, Siddhartha Srinivasa, Abhishek Gupta |
[17] Bimanual Dexterity for Complex Tasks | Kenneth Shaw, Yulong Li, Jiahui Yang, Mohan Kumar Srirama, Muxin Liu, Haoyu Xiong, Russell Mendonca, Deepak Pathak |
[18] Object-Centric Dexterous Manipulation from Human Motion Data | Yuanpei Chen, Chen Wang, Yaodong Yang, Karen Liu |
Paper Title | Authors
---|---
[1] BAKU: An Efficient Transformer for Multi-Task Policy Learning | Siddhant Haldar, Zhuoran Peng, Lerrel Pinto |
[2] Robot Utility Models: General Policies for Zero-Shot Deployment in New Environments | Haritheja Etukuru, Norihito Naka, Zijin Hu, Seungjae Lee, Chris Paxton, Soumith Chintala, Lerrel Pinto, Nur Muhammad Mahi Shafiullah |
[3] ScissorBot: Learning Generalizable Scissor Skill for Paper Cutting via Simulation, Imitation, and Sim2Real | Jiangran Lyu, Yuxing Chen, Tao Du, Feng Zhu, Huiquan Liu, Yizhou Wang, He Wang |
[4] Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins | Venkatesh Pattabiraman, Yifeng Cao, Siddhant Haldar, Lerrel Pinto, Raunaq Bhirangi |
[5] AnySkin: Plug-and-play Skin Sensing for Robotic Touch | Raunaq Bhirangi, Venkatesh Pattabiraman, Mehmet Enes Erciyes, Yifeng Cao, Tess Hellebrekers, Lerrel Pinto |
[6] Local Policies Enable Zero-shot Long-horizon Manipulation | Murtaza Dalal, Min Liu, Walter Talbott, Chen Chen, Deepak Pathak, Jian Zhang, Russ Salakhutdinov |
[7] OPEN TEACH: A Versatile Teleoperation System for Robotic Manipulation | Aadhithya Iyer, Zhuoran Peng, Yinlong Dai, Irmak Guzey, Siddhant Haldar, Soumith Chintala, Lerrel Pinto |
[8] D(R,O) Grasp: A Unified Representation of Robot and Object Interaction for Cross-Embodiment Dexterous Grasping | Zhenyu Wei, Zhixuan Xu, Jingxiang Guo, Yiwen Hou, Chongkai Gao, Cai Zhehao, Jiayu Luo, Lin Shao |
[9] From Imitation to Refinement – Residual RL for Precise Visual Assembly | Lars Lien Ankile, Anthony Simeonov, Idan Shenfeld, Marcel Torne Villasevil, Pulkit Agrawal |
[10] 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing | Binghao Huang, Yixuan Wang, Xinyi Yang, Yiyue Luo, Yunzhu Li |
[11] Watch Less, Feel More: Sim-to-Real RL for Generalizable Articulated Object Manipulation via Motion Adaptation and Impedance Control | Tan-Dzung Do, Gireesh Nandiraju, Jilong Wang, He Wang |
[12] ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data | Zeyi Liu, Cheng Chi, Eric Cousineau, Naveen Kuppuswamy, Benjamin Burchfiel, Shuran Song |
[13] Diffusion Policy Policy Optimization | Allen Z. Ren, Justin Lidard, Lars Lien Ankile, Anthony Simeonov, Pulkit Agrawal, Anirudha Majumdar, Benjamin Burchfiel, Hongkai Dai, Max Simchowitz |
[14] Probabilistic Contact Mode Planning for Multi-Finger Manipulation using Diffusion Models | Thomas Power, Abhinav Kumar, Fan Yang, Sergio Francisco Aguilera Marinovic, Soshi Iba, Rana Soltani Zarrin, Dmitry Berenson |
[15] GeoMatch++: Morphology Conditioned Geometry Matching for Multi-Embodiment Grasping | Yunze Wei, Maria Attarian, Igor Gilitschenski |
[16] DexMimicGen: Automated Data Generation for Bimanual Dexterous Manipulation via Imitation Learning | Zhenyu Jiang, Yuqi Xie, Kevin Lin, Zhenjia Xu, Weikang Wan, Ajay Mandlekar, Linxi Fan, Yuke Zhu |
[17] Catch It! Learning to Catch in Flight with Mobile Dexterous Hands | Yuanhang Zhang, Tianhai Liang, Zhenyang Chen, Yanjie Ze, Huazhe Xu |
[18] DOFS: A Real-world 3D Deformable Object Dataset with Full Spatial Information for Dynamics Model Learning | Zhen Zhang, Xiangyu Chu, Yunxi Tang, K. W. Samuel Au
[19] Analysing the Interplay of Vision and Touch for Dexterous Insertion Tasks | Janis Lenz, Theo Gruner, Daniel Palenicek, Tim Schneider, Inga Pfenning, Jan Peters |
In this workshop, our goal is to bring together researchers from various fields of robotics, including control, optimization, learning, planning, sensing, and hardware, who work on dexterous manipulation. We encourage researchers to submit work in the following areas (the list is not exhaustive):