CS Seminar, Monday, March 1, 3-4 PM, EE Packard Bldg., rm. 202 (Stanford Main Campus)
A Framework to Synergistically Combine Motion Planning in Continuous Spaces
and High-Level Planning in Discrete Spaces
Abstract:
Since its inception, research in robotics has focused on increasing the
ability of robots to plan and act on their own in order to safely complete
high-level tasks.
Toward this goal, this talk presents a multi-layered framework that
efficiently plans the sequence of motions the robot needs to execute so that
the resulting trajectory is dynamically feasible, avoids collisions, and
satisfies a given high-level specification. In contrast to traditional
motion planning, the framework can take into account high-level
specifications given as Finite State Machines, Hybrid Automata, Linear
Temporal Logic, STRIPS, and other planning-domain definition languages. Such
expressive models make it possible to specify complex tasks that frequently
arise in mobile robotics, manipulation, robotic-assisted surgery,
search-and-rescue missions, and air-traffic management.
A crucial aspect of the framework is a synergistic combination of motion
planning in continuous spaces and high-level planning in discrete spaces.
Motion planning searches for a solution by probabilistically sampling and
exploring the continuous space of collision-free and dynamically feasible
motions. Discrete planning uses information gathered by motion planning to
identify high-level actions and regions of the continuous space that motion
planning can further explore to significantly advance the search. This
interplay allows the framework, in subsequent iterations, to expand the
search along directions that are increasingly likely to yield feasible
trajectories. Physics-based simulations with
high-dimensional robotic models demonstrate significant computational
speedups over related work and show the ability of the framework to
efficiently plan valid trajectories that satisfy complex high-level
specifications.
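
A minimal conceptual sketch of the synergistic loop described above, written in Python. The class and method names (SynergisticPlanner, suggest_guide, extend_toward, update_estimates, trajectory_satisfying) are illustrative placeholders, not the speaker's actual implementation.

# Conceptual sketch of a synergistic discrete/continuous planning loop.
# All interfaces below are hypothetical placeholders used only to illustrate
# the interplay outlined in the abstract.

class SynergisticPlanner:
    def __init__(self, discrete_planner, motion_tree, spec):
        self.discrete_planner = discrete_planner  # plans over the discrete abstraction
        self.motion_tree = motion_tree            # samples continuous, dynamically feasible motions
        self.spec = spec                          # high-level specification (e.g., an automaton)

    def solve(self, max_iters=1000):
        for _ in range(max_iters):
            # 1. The discrete layer suggests a promising sequence of high-level
            #    actions/regions, using progress estimates fed back from the
            #    continuous layer.
            guide = self.discrete_planner.suggest_guide(self.spec)

            # 2. The continuous layer extends its tree of collision-free,
            #    dynamically feasible motions toward the suggested regions.
            new_states = self.motion_tree.extend_toward(guide)

            # 3. Feedback: report how much progress was made so the discrete
            #    layer can reweight its search in the next iteration.
            self.discrete_planner.update_estimates(guide, new_states)

            # Stop as soon as some branch of the motion tree satisfies the
            # high-level specification.
            trajectory = self.motion_tree.trajectory_satisfying(self.spec)
            if trajectory is not None:
                return trajectory
        return None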
Erion Plaku
Faculty candidate
Johns Hopkins University
Bio:
Erion Plaku is a Postdoctoral Fellow at the Laboratory for Computational
Sensing and Robotics in the Department of Computer Science at Johns Hopkins
University. He received the Ph.D. degree in Computer Science from Rice
University in 2008. His research focuses on the integration of motion planning
and discrete planning for human-machine cooperative or fully automatic task
performance in complex domains. Some applications include mobile robotics,
manipulation, air-traffic management, haptic exploration, and
robotic-assisted surgery. His research interests encompass robotics, hybrid
systems, AI, logic, computational geometry, data mining, and large-scale
distributed computing.