I am an Assistant Professor in the
Robotics Center and
School of Computing at the University of Utah. Previously, I was a postdoc at UC Berkeley working with
Anca Dragan and
Ken Goldberg on human-in-the-loop robot learning.
I received my PhD in 2020 from the CS department at UT Austin where I was advised by
Scott Niekum.
My research focuses on robot learning under uncertainty, reward inference from human input, and AI safety.
In particular, I've worked on methods that give robots and other autonomous agents the ability to provide
high-confidence bounds on performance when learning a policy from a limited number of demonstrations,
ask risk-aware queries to better resolve ambiguities and perform
robust policy optimization from demonstrations,
learn more efficiently from
informative demonstrations,
learn from
ranked suboptimal demonstrations,
even
when rankings are unavailable,
and perform
fast Bayesian reward inference for visual control tasks.