Keynote Speakers
Jonathan How
Massachusetts Institute of Technology
Talk Title: Navigation and Mapping for Robot Teams in Uncertain Environments
Abstract
Many robotic tasks require robot teams to autonomously operate in challenging, partially observable, dynamic environments with limited field-of-view sensors. In such scenarios, individual robots need to be able to plan and execute safe paths on short timescales to avoid imminent collisions. Robots can leverage high-level semantic descriptions of the environment to plan beyond their immediate sensing horizon. For mapping on longer timescales, the agents must also be able to align and fuse imperfect and partial observations to construct a consistent and unified representation of the environment. Furthermore, these tasks must be performed autonomously onboard, which typically adds significant complexity to the system. This talk will highlight three recently developed solutions to these challenges that have been implemented to (1) robustly plan paths and demonstrate high-speed agile flight of a quadrotor in unknown, cluttered environments; (2) plan beyond the line-of-sight by utilizing the learned context within the local vicinity, with applications in last-mile delivery; and (3) correctly synchronize partial and noisy representations and fuse maps acquired by (single or multiple) robots using a multi-way data association algorithm, showcased on a simultaneous localization and mapping (SLAM) application.
Bio
Jonathan P. How is the Richard C. Maclaurin Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology. He received a B.A.Sc. from the University of Toronto in 1987, and his S.M. and Ph.D. in Aeronautics and Astronautics from MIT in 1990 and 1993, respectively. Prior to joining MIT in 2000, he was an assistant professor in the Department of Aeronautics and Astronautics at Stanford University. He was the editor-in-chief of the IEEE Control Systems Magazine (2015-19) and was elected to the Board of Governors of the IEEE Control Systems Society (CSS) in 2019. His research focuses on robust planning and learning under uncertainty with an emphasis on multiagent systems. His work has been recognized with multiple awards, including the 2020 AIAA Intelligent Systems Award, the 2002 Institute of Navigation Burka Award, the 2011 IFAC Automatica Award for best applications paper, the 2015 AeroLion Technologies Outstanding Paper Award for Unmanned Systems, the 2015 IEEE Control Systems Society Video Clip Contest, the IROS Best Paper Award on Cognitive Robotics (2017 and 2019), and three AIAA Best Paper in Conference Awards (2011-2013). He was awarded the Air Force Commander's Public Service Award (2017). He is a Fellow of IEEE and AIAA.
Simon Lucey
Carnegie Mellon University
Talk Title: Geometric reasoning in machine vision using only 2D supervision
Abstract
Machine vision has made tremendous progress over the last decade with respect to perception. Much of this progress can be attributed to two factors: the ability of deep neural networks (DNNs) to reliably learn a direct relationship between images and labels; and access to a plentiful number of images with corresponding labels. We often refer to these labels as supervision – because they directly supervise what we want the vision algorithm to predict when presented with an input image. 2D labels are relatively easy for the computer vision community to come by: human annotators are hired to draw – literally through mouse clicks on a computer screen – boxes, points, or regions for a few cents per image. But how to obtain 3D labels remains an open problem for the robotics and vision community. Rendering through computer generated imagery (CGI) is problematic, since the synthetic images seldom match the appearance and geometry of the objects we encounter in the real world. Hand annotation by humans is preferable, but current strategies rely on the tedious process of associating the natural images with a corresponding external 3D shape – something we refer to as “3D supervision”. In this talk, I will discuss recent efforts my group has been making to train a geometric reasoning system using solely 2D supervision. By inferring the 3D shape solely from 2D labels we can ensure that all geometric variation in the training images is learned. This innovation lays the groundwork for training the next generation of reliable geometric reasoning AIs needed to address emerging needs in autonomous transport, disaster relief, and endangered species preservation.
Bio
Simon Lucey (Ph.D.) is an associate research professor within the Robotics Institute at Carnegie Mellon University, where he is part of the Computer Vision Group and leader of the CI2CV Laboratory. Since 2017, he has also been a principal scientist at Argo AI. Before this he was an Australian Research Council Future Fellow at the CSIRO (Australia's premier government science organization) for 5 years. Simon’s research interests span computer vision, robotics, and machine learning. He enjoys drawing inspiration from vision researchers of the past in an attempt to unlock the computational and mathematical models that underlie the processes of visual perception.
Symposium Speakers