Organizers: Jim Mainprice, Arunkumar Byravan, Mathew Monfort, Roberto Calandra, Stefan Schaal
As technology in autonomous robotics continues to evolve, so does the complexity of the decision problems that we expect our systems to solve. The required action policies range from low-level control of forces to high-level selection of complex strategies. These decision problems are often straightforward for humans yet remain difficult for standard robotics approaches. In this context, Learning from Demonstrations (LfD) can reduce the difficulty of defining action policies by providing expert knowledge in the form of examples of near-optimal behavior. Understanding and formalizing LfD has been a topic in many fields of science, including robotics, neuroscience, cognitive science, psychology and anthropology. However, many LfD problems remain intractable because their coupling to high-dimensional observation spaces (e.g., visual, haptic or auditory) forces exceedingly high-dimensional representations.

In this workshop, we will present and initiate a discussion on techniques that could handle this high dimensionality. We are inviting experts in machine learning, cognitive science and robotics with the aim of fostering collaboration and sharing new ideas across this multidisciplinary field. Special emphasis will be placed on inverse optimal control and inverse reinforcement learning, which allow reward signals to be constructed from high-dimensional feature spaces, as well as on LfD techniques based on deep neural networks.