Abstract: The principle of maximum entropy provides a powerful framework for estimating joint, conditional, and marginal probability distributions. Markov random fields and conditional random fields can be viewed as the maximum entropy approach in action. However, beyond joint and conditional distributions, there are many other important distributions with elements of interaction and feedback where its applicability has not been established. In this talk, I will present the principle of maximum causal entropy—an approach based on directed information theory for estimating an unknown process based on its interactions with a known process.
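As a hypothetical illustration of the maximum entropy principle mentioned above (not an example from the talk itself), consider Jaynes' classic dice problem: among all distributions over the faces {1..6} with a fixed mean, the maximum entropy distribution is exponential-family, with p_i proportional to exp(lam * i). The sketch below finds the Lagrange multiplier lam by bisection, assuming the target mean is 4.5.

```python
import math

# Maximum entropy over die faces {1..6} subject to a mean constraint.
# The solution is p_i ∝ exp(lam * i); since the mean of dist(lam) is
# monotonically increasing in lam, bisection recovers lam.

faces = [1, 2, 3, 4, 5, 6]
target_mean = 4.5  # assumed constraint for this illustration

def dist(lam):
    """Exponential-family distribution with multiplier lam."""
    weights = [math.exp(lam * f) for f in faces]
    total = sum(weights)
    return [w / total for w in weights]

def mean(p):
    return sum(f * pi for f, pi in zip(faces, p))

lo, hi = -10.0, 10.0  # bracket for the Lagrange multiplier
for _ in range(200):
    mid = (lo + hi) / 2
    if mean(dist(mid)) < target_mean:
        lo = mid
    else:
        hi = mid

p = dist((lo + hi) / 2)
print(p)  # mass tilts toward the high faces, since 4.5 > 3.5
```

Markov and conditional random fields generalize this idea: the moment constraints come from feature expectations, and the resulting maximum entropy distribution is again exponential-family (log-linear) in those features.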
Bio: Brian Ziebart is an Assistant Professor in the Department of Computer Science at the University of Illinois at Chicago. He received his PhD in Machine Learning from Carnegie Mellon University in 2010 and remained there as a postdoctoral fellow. He also holds a B.S. in Computer Engineering (highest honors) from the University of Illinois at Urbana-Champaign. His research interests include machine learning, decision theory, game theory, robotics, and assistive technologies.