Spatial-Semantic Representations in Robotics
Organizers: Matthew Walter, Thomas Howard

Website: http://www.ece.rochester.edu/projects/rail/ssrr2017/

Much attention in the robotics community over the last several decades has focused on low-level geometric environment representations in the form of primitive feature-based maps, occupancy grids, and point clouds. As robots perform a wider variety of tasks in increasingly large and complex environments, the fidelity and richness of the environment representation become critical. Choosing the correct representation is important: representations that are overly rich may be too complex to evaluate, while those that are overly simplified may not be expressive enough for difficult tasks. Effective perception algorithms should be capable of learning the appropriate fidelity and complexity of an environment representation from multimodal observations. Recognizing this need, researchers have devoted greater attention to developing spatial-semantic models that jointly express the geometric, topological, and semantic properties of the robot's environment.

This workshop will build on the success of two workshops organized by Walter and Howard on model learning for human-robot collaboration at RSS 2015 and RSS 2016 to bring together a multidisciplinary group of researchers working at the intersection of robotics, computer vision, simultaneous localization and mapping, and machine learning. The forum will provide an opportunity for researchers to showcase recent efforts to develop models and algorithms for jointly representing the spatial and semantic characteristics of complex, unstructured environments. The program will combine invited and contributed talks with interactive discussions to provide an atmosphere for discourse on state-of-the-art work in learning environment representations. This workshop is organized as an outreach activity under the NSF National Robotics Initiative award led by Walter and Howard titled “Learning Adaptive Representations for Robust Mobile Robot Navigation from Multi-Modal Interactions”.