Unusual Appendages: novel, multi-modal, or multi-functional uses for limbs, tails, and other body parts

Organizers: Amir Patel, Aaron Johnson, and Tom Libby

Website: https://www.cmu.edu/me/robomechanicslab/ws/rss2018.html

A long-standing goal is the realization of robots that can easily join and effectively work alongside people in our homes, manufacturing centers, and healthcare facilities. To achieve this vision, we need robots that people can command, control, and communicate with in ways that are intuitive, expressive, and flexible. Recognizing this need, much recent attention has focused on natural language speech as an effective medium for human-robot communication. A primary challenge in language understanding is relating freeform language to a robot's world model — its understanding of our unstructured environments and the ways in which it can act in them. This problem dates back to the earliest days of artificial intelligence and has seen renewed interest with advances in machine learning and probabilistic inference.

This workshop will build on the success of the previous two workshops on Model Learning for Human-Robot Collaboration at RSS 2015 and RSS 2016 and the workshop on Spatial-Semantic Representations for Robotics at RSS 2017, bringing together a multidisciplinary group of researchers working at the intersection of robotics, machine perception, natural language processing, and machine learning. The forum will provide an opportunity to showcase recent efforts to develop models, algorithms, and representations capable of efficiently understanding natural, unstructured methods of communication in the context of complex, unstructured environments. The program will combine invited and contributed talks with interactive discussions, providing an atmosphere for discourse on progress towards, and challenges that inhibit, bidirectional human-robot communication.