Workshop on Learning for Task and Motion Planning


This workshop will investigate the role of learning within Task and Motion Planning (TAMP). TAMP has shown remarkable capabilities in scaling to long action sequences, many objects, and a variety of tasks. However, TAMP often relies on simplified models, assumes perfect world knowledge, and requires long computation times, which limits its real-world applicability. To address these limitations, there has been significant recent interest in integrating learning methods into TAMP [1, 2, 3, 4, 5]. Despite this progress, many open questions must be addressed before learning-based TAMP can be applied in real-world settings and in full generality.

[1] Joaquim Ortiz-Haro, Jung-Su Ha, Danny Driess, Marc Toussaint. Structured deep generative models for sampling on constraint manifolds in sequential manipulation. CoRL 2022.
[2] Christopher Agia, Toki Migimatsu, Jiajun Wu, Jeannette Bohg. TAPS: Task-Agnostic Policy Sequencing. arXiv 2022.
[3] Shuo Cheng, Danfei Xu. Guided skill learning and abstraction for long-horizon manipulation. CoRL Workshop on Learning, Perception, and Abstraction for Long-Horizon Planning 2022.
[4] Tom Silver, Rohan Chitnis, Joshua Tenenbaum, Leslie Pack Kaelbling, Tomás Lozano-Pérez. Learning symbolic operators for task and motion planning. IROS 2021.
[5] Zhutian Yang, Caelan Reed Garrett, Leslie Pack Kaelbling, Tomás Lozano-Pérez, Dieter Fox. Sequence-based plan feasibility prediction for efficient task and motion planning. RSS 2023.

The goal of this workshop is to bring together a diverse set of researchers from TAMP, learning for TAMP, computer vision, classical AI planning, and NLP to discuss not only the current state of learning for TAMP, but also the broader state of learning for planning in robotics. Subtopics include: benchmarks, language, perception, skill learning, manipulation, closed-loop TAMP, and policy learning. All of these discussions will take place against the backdrop of recent progress in foundation models.

Discussion Topics

  • Benchmarks: What benchmark environments will help the community better measure the progress of TAMP and integrate learning with TAMP?
  • Large Language Models for Task Planning: LLMs have shown promise in generalizable task planning. What are the potential benefits and challenges of integrating LLMs with TAMP?
  • Perception: How can TAMP leverage the latest developments in perception, including 3D computer vision, open-world recognition, and sensor fusion?
  • Error Recovery / Closed-Loop TAMP: Techniques and principles that allow TAMP to be more robust and recover from errors and interventions.
  • Skill Learning / Manipulation: Skill learning and manipulation for TAMP, including but not limited to deep reinforcement learning and imitation learning. How do we make the learned skills generalizable and composable?
  • Model Learning: Techniques and principles for learning discrete or continuous transition models from interaction data.

Call for Papers

Important dates (AoE):

  • Submission open: May 1
  • Submission deadline: May 24
  • Paper acceptance notification: June 12
  • Camera ready due: July 1
  • Workshop: July 10

Submission types:

  • Short papers: 4 pages. An appendix of unlimited length may be included in the same document as the main text. Papers may be work that has been submitted to or accepted by other conferences or journals.
  • “Blue Sky” papers: We seek “Blue Sky” submissions, recommended 2-4 pages in length, that present a novel high-level perspective on the challenges associated with learning for TAMP. Preference will be given to early-career academics: senior graduate students, postdocs, and pre-tenure faculty. Blue Sky submissions are expected to have only a single author. Please include “[Blue Sky]” in the paper title.

Please use this LaTeX paper template and submit via OpenReview. Reviewing will be single-blind, so there is no need to anonymize your document.

Tentative Schedule

The workshop will take place on July 10 in hybrid mode. The in-person location is Daegu, Republic of Korea.

Time (KST, GMT+9)    Event
09:00 am - 09:10 am  Introductory Remarks
09:10 am - 10:00 am  Invited Talks 1, 2, and 3
10:00 am - 10:30 am  Invited Speaker Panel 1
10:30 am - 11:00 am  Coffee Break / Posters 1
11:00 am - 12:00 pm  Invited Talks 4, 5, and 6
12:00 pm - 01:30 pm  Lunch
01:30 pm - 02:00 pm  Invited Speaker Panel 2
02:00 pm - 03:00 pm  Poster Spotlight Talks
03:00 pm - 03:30 pm  Coffee Break / Posters 2
03:30 pm - 04:20 pm  Invited Talks 7, 8, and 9
04:20 pm - 04:50 pm  Invited Speaker Panel 3
04:50 pm - 05:00 pm  Concluding Remarks


Speaker Bios

Dieter Fox is the head of the UW Robotics and State Estimation Lab (RSE-Lab). He is also a Senior Director of Robotics Research at Nvidia. His research is in robotics, with strong connections to artificial intelligence, computer vision, and machine learning.

Fei-Fei Li is the inaugural Sequoia Professor in the Computer Science Department at Stanford University and Co-Director of Stanford’s Human-Centered AI Institute. Her current research interests include cognitively inspired AI, machine learning, deep learning, computer vision, and AI+healthcare, especially ambient intelligent systems for healthcare delivery.

Florian Shkurti is an assistant professor in computer science at the University of Toronto, focusing on robotics, machine learning, and vision. His research group develops methods that enable robots to perceive, reason, plan, and act effectively and safely, particularly in dynamic environments and alongside humans. Application areas of his research include field robotics for environmental monitoring, visual navigation for autonomous vehicles, and mobile manipulation.

Hector Geffner is an Alexander von Humboldt Professor at RWTH Aachen University, Germany and a Guest Wallenberg Professor at Linköping University, Sweden. Before joining RWTH, he was an ICREA Research Professor at the Universitat Pompeu Fabra (UPF) in Barcelona, Spain. Hector obtained a Ph.D. in Computer Science at UCLA and then worked at the IBM T.J. Watson Research Center in NY and at the Universidad Simon Bolivar in Caracas. Distinctions for his work include the 1990 ACM Dissertation Award and three ICAPS Influential Paper Awards. He currently leads a project on representation learning for acting and planning (RLeap), funded by an Advanced ERC grant, where he has been studying the problem of learning the different types of structures needed to act and plan, such as action models, general policies, general subgoal structures, and hierarchical policies.

Marc Toussaint is a professor in the area of AI & Robotics at TU Berlin. His research interests lie at the intersection of AI and robotics, namely in using machine learning, optimization, and AI reasoning to tackle fundamental problems in robotics. He works on models and algorithms for physical reasoning, task-and-motion planning (logic-geometric programming), learning heuristics, the planning-as-inference paradigm, algorithms and methods for robotic building construction, and learning to transfer model-based strategies to reactive and adaptive real-world behavior.

Masataro Asai is a Research Staff Member at IBM Research Cambridge (MIT-IBM Watson AI Lab). He received a Ph.D. from the University of Tokyo in 2018 under Alex Fukunaga and worked for IBM Research Tokyo during 2019. His main expertise is classical planning and heuristic graph search, while his recent work focuses on the automatic identification of discrete symbolic entities that aid planning, i.e., symbol grounding, with the help of deep neural networks.

Pratyusha Sharma is a PhD student in EECS at MIT advised by Prof. Antonio Torralba and Prof. Jacob Andreas. Her research goal is to understand what can be learned from rich multimodal interactions (vision, touch, and sound) with objects (and people) in the world around us. She is also interested in developing systems that enable robots to efficiently abstract knowledge across tasks, reason, understand goals, and reliably interact in the real world.

Russ Tedrake is the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT. He is interested in finding elegant control solutions for interesting (underactuated, stochastic, and/or difficult-to-model) dynamical systems that he can build and experiment with. He is particularly interested in finding connections between mechanics (especially non-smooth mechanics) and optimization theory that enable formal analysis and control design for complex mechanical systems. These days he is primarily interested in bringing the rigor of systems theory to robot manipulation.

Siddharth Srivastava is an assistant professor at Arizona State University. His research focuses on learning generalizable knowledge for reliable sequential decision making, AI safety, and AI assessment. He is a recipient of the NSF CAREER award, a Best Paper award at the International Conference on Automated Planning and Scheduling (ICAPS), and an Outstanding Dissertation award from the Department of Computer Science at UMass Amherst. His work on TAMP focuses on well-founded algorithms for using abstractions for integrated task and motion planning, with an emphasis on learning transferable state and action hierarchies.