CRV 2012 Invited Speakers
Monday, May 28th, 9:00-10:00 am
Oliver Brock, Technische Universität Berlin
Abstract: Perception for Interaction: From the perspective of a roboticist, perception must enable interaction with the world (not just action in the world). And from the perspective of a computer vision researcher, it might seem that geometric models of the environment go a long way towards accomplishing this goal. I will argue that robust perception for interaction should not exclusively consider the geometry of the manipulated objects. Instead, it must be tailored to the embodiment performing the interaction, the objects affected by the interaction, and the characteristics of the interaction. Obviously, the success of an interaction will depend on all three of these aspects. It therefore seems problematic to ignore any one of them. I will illustrate in the context of manipulation and grasping how this view can lead to robust and efficient perception.
Bio: Oliver Brock is the Alexander von Humboldt professor of robotics at the Technische Universität Berlin. He was an assistant and associate professor of computer science at the University of Massachusetts Amherst from 2002 to 2009. He received his computer science diploma in 1993 from the Technische Universität Berlin and his Master's and Ph.D. degrees in computer science from Stanford University in 1994 and 2000, respectively. He was co-founder and CTO of AllAdvantage.com. He also held post-doc positions at Rice University and Stanford University. Oliver Brock's research focuses on mobile manipulation, interactive perception, manipulation learning, and the application of robotic algorithms to problems in structural molecular biology.
Tuesday, May 29th, 9:00-10:00 am
Allan Jepson, University of Toronto
Abstract: The last 30 years have seen tremendous progress on the structure-from-motion problem, especially for globally rigid scenes. Early work on minimal point configurations has turned into systems for city-scale reconstruction, with millions of points and hundreds of thousands of views. However, less is known about how to solve for structure-from-motion when the scene is not rigid. Non-rigidity covers a broad spectrum of phenomena, including deforming surfaces, articulated structures, groups of rigidly-moving bodies, and any combination thereof. We introduce locally-rigid motion, a form of non-rigid motion in which spatially local structures can be modelled as rigid. The key idea is to first solve many local three-point, N-view rigid problems independently, under the assumption of orthography. This provides a "soup" of specific, plausibly rigid, 3D triangles. Triangles from this soup are then combined into larger non-rigid structures. We compare the results of this approach with other recent techniques.
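The local rigid problems above rest on a classical property: under orthographic projection, the image measurements of a rigid point set stack into a matrix of rank at most three. The sketch below illustrates that property with a generic Tomasi-Kanade-style factorization on synthetic data; it is not the speaker's specific three-point method, and the scene setup and variable names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

P, N = 10, 8                               # points and views (synthetic)
X = rng.standard_normal((P, 3))            # one rigid 3D point set

rows = []
for _ in range(N):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))  # random orthogonal matrix
    rows.append(Q[:2] @ X.T)               # orthographic camera: keep 2 rows
W = np.vstack(rows)                        # 2N x P measurement matrix
W = W - W.mean(axis=1, keepdims=True)      # centering removes translation

# Under rigid motion and orthography, the centered W has rank at most 3.
rank = np.linalg.matrix_rank(W, tol=1e-8)

# Factor into motion (2N x 3) and shape (3 x P); the factorization is
# defined only up to an affine ambiguity, which metric (rigidity)
# constraints would resolve in a full reconstruction pipeline.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * np.sqrt(s[:3])
S_hat = np.sqrt(s[:3])[:, None] * Vt[:3]
residual = np.linalg.norm(W - M_hat @ S_hat)
```

With only three points, as in the local problems the abstract describes, the 2N x 3 measurement matrix makes this rank constraint minimal, which is what keeps each local problem small and independent.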
Bio: Allan Jepson received his B.Sc. in 1976 from the University of British Columbia and his Ph.D. in Applied Mathematics in 1980 from the California Institute of Technology. He then moved to a postdoctoral position in the Mathematics Department at Stanford University. In 1982 he joined the faculty at the Department of Computer Science at the University of Toronto, where he has been a full professor since 1991. His current research interests span a range of topics in computer vision, including image grouping, image segmentation, and motion estimation.
Wednesday, May 30th, 1:30-2:30 pm
Paul Newman, University of Oxford
Abstract: I shall talk about long-term, vast-scale navigation. What things must we do, what questions must we pose, to have a robot navigate precisely over days, weeks, and hundreds of miles? How might a robot get better at doing that over time and in teams? Can we envision life-long learning for a robot? What sensors and representations are apt? And of course - why might we want a robot to have such competencies? Could that be valuable? Of course it could!
Bio: Prof. Paul Newman obtained an M.Eng. in Engineering Science from Oxford University in 1995. He then undertook a Ph.D. in autonomous navigation at the Australian Centre for Field Robotics, University of Sydney, Australia. In 1999 he returned to the United Kingdom to work in the commercial sub-sea navigation industry. In late 2000 he joined the Department of Ocean Engineering at M.I.T. where, as a post-doc and later a research scientist, he worked on algorithms and software for robust autonomous navigation for both land and sub-sea agents. In early 2003 he returned to Oxford as a Departmental Lecturer in Engineering Science before being appointed to a University Lectureship in Information Engineering and becoming a Fellow of New College in 2005 and a Professor of Engineering Science in 2010. Over the course of his career he has developed a particular expertise in the application of probabilistic methods to robotic navigation and mapping. His focus lies on pushing the boundaries of navigation techniques in terms of both endurance and scale. The mobile robotics group which he leads enjoys collaborations with many industrial parties which provide exploitation opportunities to drive the research. Recently the group has developed a keen focus on intelligent transport (a.k.a. cars that drive themselves). He is on the editorial board for the top two robotics science journals (IJRR and JFR), was an IEEE Distinguished Lecturer for Europe in 2009 and 2010, and is the IEEE media spokesman for robotics and automation. In 2012 he will be Program Chair for Robotics: Science and Systems, and in 2010 he was awarded an EPSRC Leadership Fellowship.