Organizers: Konstantinos Karydis, Nikolay Atanasov, Sergey Levine, Nick Roy, Claire Tomlin, Vijay Kumar
Recent progress in Unmanned Aerial Vehicles (UAVs) has opened up great opportunities for the use of small-scale UAVs in disaster response, environmental monitoring, security, and inspection. While reliable GPS-denied localization and safe, aggressive maneuvering have been demonstrated successfully, closing the loop between scene understanding and action planning remains an open problem. Deep learning has emerged as an especially promising way of extracting semantic meaning suitable for high-level autonomy. Learning techniques achieve state-of-the-art performance in object recognition and natural language processing by replacing hand-engineered features with features extracted automatically from training data. Learning perception and control for autonomous flight can be approached in much the same way: by replacing hand-engineered map representations with raw sensor observations and learning appropriate responses.
The workshop will bring together researchers from robot planning and control, reinforcement learning, deep learning, and formal methods to examine the challenges and opportunities in learning perception and control for safe, high-speed flight in unknown environments. Of particular interest are i) how to theoretically analyze the data and structure of learning systems in order to provide guarantees on safety and task success, and ii) the effect of long-term memory, in particular whether recurrent connections or dynamic external memory can replace global map information. The goal is to report on state-of-the-art approaches, identify open problems, and devise new principles to meet the challenges that arise when applying deep learning to safety-critical planning tasks that incorporate timing and memory aspects.