Workshop on Learning for Task and Motion Planning


This workshop will investigate the role of learning within Task and Motion Planning (TAMP). TAMP has shown remarkable capabilities in scaling to long action sequences, many objects, and a variety of tasks. However, TAMP often relies on simplified models, assumes perfect world knowledge, and requires long computation times, which limits its real-world applicability. To address these limitations, there has been significant recent interest in integrating learning methods into TAMP [1-5]. Despite this progress, many open questions must be addressed before learning-based TAMP can be applied in real-world settings and in full generality.

1. Joaquim Ortiz-Haro, Jung-Su Ha, Danny Driess, Marc Toussaint. Structured deep generative models for sampling on constraint manifolds in sequential manipulation. CoRL 2022.
2. Christopher Agia, Toki Migimatsu, Jiajun Wu, Jeannette Bohg. TAPS: Task-Agnostic Policy Sequencing. arXiv 2022.
3. Shuo Cheng, Danfei Xu. Guided skill learning and abstraction for long-horizon manipulation. CoRL Workshop on Learning, Perception, and Abstraction for Long-Horizon Planning 2022.
4. Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomás Lozano-Pérez. Learning symbolic operators for task and motion planning. IROS 2021.
5. Zhutian Yang, Caelan Reed Garrett, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Dieter Fox. Sequence-based plan feasibility prediction for efficient task and motion planning. RSS 2023.

The goal of this workshop is to bring together a diverse set of researchers from TAMP, learning for TAMP, computer vision, classical AI planning, and NLP to discuss not only the current state of learning for TAMP, but also the broader state of learning for planning in robotics. Subtopics include: benchmarks, language, perception, skill learning, manipulation, closed-loop TAMP, and policy learning. All of these discussions will take place against the backdrop of recent progress in foundation models.

Discussion Topics

  • Benchmarks: What benchmark environments will help the community better measure the progress of TAMP and integrate learning with TAMP?
  • Large Language Models for Task Planning: LLMs have shown promise in generalizable task planning. What are the potential benefits and challenges of integrating LLMs with TAMP?
  • Perception: Perception for TAMP and the latest developments in this area, including 3D computer vision, open-world recognition, and sensor fusion.
  • Error Recovery / Closed-Loop TAMP: Techniques and principles that allow TAMP to be more robust and recover from errors and interventions.
  • Skill Learning / Manipulation: Skill learning and manipulation for TAMP, including but not limited to deep reinforcement learning and imitation learning. How do we make the learned skills generalizable and composable?
  • Model Learning: Techniques and principles for learning discrete or continuous transition models from interaction data.

Schedule

The workshop took place on July 10 in hybrid mode. The in-person sessions were held in Daegu, Republic of Korea.

The recording of the workshop can be accessed on YouTube.

Time (KST, GMT+9) Event
9:00 am - 9:10 am Introductory Remarks
9:10 am - 9:25 am Invited Talk 1: Siddharth Srivastava
9:25 am - 9:40 am Invited Talk 2: Pratyusha Sharma
9:40 am - 9:55 am Invited Talk 3: Dieter Fox (Remote)
9:55 am - 10:30 am Panel 1: Dieter Fox (Remote), Pratyusha Sharma, Siddharth Srivastava
10:30 am - 11:00 am Coffee Break
11:00 am - 11:15 am Spotlight Talk 1: Nishanth Kumar, Willie McClinton (Remote)
11:15 am - 11:30 am Spotlight Talk 2: Robert Gieselmann
11:30 am - 11:45 am Spotlight Talk 3: Nina Marie Moorman
11:45 am - 12:00 pm Spotlight Talk 4: Zirui Zhao
12:00 pm - 1:30 pm Lunch
1:30 pm - 3:00 pm Poster Session
3:00 pm - 3:30 pm Coffee Break
3:30 pm - 3:45 pm Invited Talk 4: Hector Geffner
3:45 pm - 4:00 pm Invited Talk 5: Florian Shkurti
4:00 pm - 4:15 pm Invited Talk 6: Masataro Asai (Remote)
4:15 pm - 4:50 pm Panel 2: Masataro Asai (Remote), Hector Geffner, Florian Shkurti
4:50 pm - 5:00 pm Concluding Remarks / Best Paper Award

Please use this LaTeX paper template and submit via OpenReview. Reviews will be single-blind, so there is no need to anonymize your submission.

Speakers


Dieter Fox is the head of the UW Robotics and State Estimation Lab (RSE-Lab). He is also a Senior Director of Robotics Research at NVIDIA. His research is in robotics, with strong connections to artificial intelligence, computer vision, and machine learning.

Florian Shkurti is an assistant professor in computer science at the University of Toronto, focusing on robotics, machine learning, and vision. His research group develops methods that enable robots to perceive, reason, plan, and act effectively and safely, particularly in dynamic environments and alongside humans. Application areas of his research include field robotics for environmental monitoring, visual navigation for autonomous vehicles, and mobile manipulation.

Hector Geffner is an Alexander von Humboldt Professor at RWTH Aachen University, Germany, and a Guest Wallenberg Professor at Linköping University, Sweden. Before joining RWTH, he was an ICREA Research Professor at the Universitat Pompeu Fabra (UPF) in Barcelona, Spain. Hector obtained a Ph.D. in Computer Science at UCLA and then worked at the IBM T.J. Watson Research Center in NY and at the Universidad Simon Bolivar in Caracas. Distinctions for his work include the 1990 ACM Dissertation Award and three ICAPS Influential Paper Awards. He currently leads a project on representation learning for acting and planning (RLeap), funded by an Advanced ERC grant, where he has been studying the problem of learning the different types of structures needed to act and plan, such as action models, general policies, general subgoal structures, and hierarchical policies.

Masataro Asai is a Research Staff Member at IBM Research Cambridge (MIT-IBM Watson AI Lab). He received a Ph.D. from the University of Tokyo in 2018 under Alex Fukunaga and worked at IBM Research Tokyo during 2019. His main expertise is classical planning and heuristic graph search, while his recent work focuses on the automatic identification of discrete symbolic entities that aid planning, i.e., symbol grounding, with the help of deep neural networks.

Pratyusha Sharma is a PhD student in EECS at MIT advised by Prof. Antonio Torralba and Prof. Jacob Andreas. Her research goal is to understand what can be learned from rich multimodal interactions (vision, touch, and sound) with objects (and people) in the world around us. She is also interested in developing systems that enable robots to efficiently abstract knowledge across tasks, reason, understand goals, and reliably interact in the real world.

Siddharth Srivastava is an assistant professor at Arizona State University. His research focuses on learning generalizable knowledge for reliable sequential decision making, AI safety, and AI assessment. He is a recipient of the NSF CAREER award, a Best Paper award at the International Conference on Automated Planning and Scheduling (ICAPS), and an Outstanding Dissertation award from the Department of Computer Science at UMass Amherst. His work on TAMP focuses on well-founded algorithms for using abstractions for integrated task and motion planning, with an emphasis on learning transferable state and action hierarchies.