“Dynamic Scene Understanding for Driver Assistance”
Prof. James M. Rehg, Georgia Institute of Technology
In this talk I will review our current progress in developing methods for dynamic scene understanding that can support new forms of driver assistance. Our approach is based purely on visual information collected from one or more vehicle-mounted cameras.

I will begin by reviewing the challenges that arise in geometric reconstruction of the vehicle environment when the scene consists not only of static objects (buildings, bridges, etc.) but also contains independently moving objects such as cars or people. I will present an approach to simultaneously recovering the scene geometry and vehicle motion that can handle the presence of these independently moving objects.

While geometric models of the scene are useful for route planning and driver safety, additional semantic information about the scene is needed to support driver assistance. Examples of semantic information include the names and functions of nearby buildings, the identification of navigation landmarks, the detection of entrances and exits, and so forth. I will describe an approach to combining pixel-level semantic labels with 3D geometry to support reasoning about the semantics and structure of the vehicle environment, and I will show preliminary results on standard datasets.

Finally, in order for video analysis processes to coexist within a heterogeneous task environment and meet the constraints of a real-time embedded platform, it is desirable to have flexible, resource-aware analysis techniques that can adapt to the available system resources on the fly. We are developing a principled approach to the construction of resource-aware vision algorithms, and I will present preliminary results from an incremental approach to feature learning.
This is joint work with Yin Li and Abhijit Kundu.
James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is the Director of the Center for Behavior Imaging, co-Director of the Computational Perception Lab, and Associate Director of Research in the Center for Robotics and Intelligent Machines. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995 to 2001, where he managed the computer vision research group. He received the National Science Foundation (NSF) CAREER award in 2001 and the Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005 and BMVC 2010. Dr. Rehg is active in the organizing committees of the major conferences in computer vision, most recently serving as General co-Chair for IEEE CVPR 2009 and as Program co-Chair for ACCV 2012. He has served on the Editorial Board of the International Journal of Computer Vision since 2004. He has authored more than 100 peer-reviewed scientific papers and holds 23 issued US patents. Dr. Rehg is currently leading a multi-institution effort to develop the science and technology of Behavior Imaging, funded by an NSF Expedition award (see www.cbs.gatech.edu for details).
Organized by the ISTC-EC Seminar Committee
Priya Narasimhan, Carnegie Mellon (Co-Chair)
Jeff Parkhurst, Intel Labs (Co-Chair)
Ahmed Al Maashri, Penn State University
John Schulman, University of California Berkeley
Glenn Ko, University of Illinois Urbana-Champaign
Minsung Jang, Georgia Institute of Technology
Ketan Bhardwaj, Georgia Institute of Technology
Kunal Mankodiya, Carnegie Mellon University
Jennifer Gabig, Carnegie Mellon University
Katerina Fragkiadaki, University of Pennsylvania
Yuan Tian, Cornell University