Bolei Zhou
Assistant Professor
Computer Science Department, University of California, Los Angeles
Office: 295D, Engineering VI, UCLA
My research is at the intersection of computer vision and machine autonomy, with a focus on developing interpretable and generalizable embodied AI that aligns with humans. I am also interested in understanding human-centric properties of current AI models beyond their accuracy, such as explainability, interpretability, steerability, generalization, and safety. Some of the earlier works I co-authored include Class Activation Mapping (CAM), Places, ADE20K, and Network Dissection.
See MetaDriverse for recent work on machine autonomy and GenForce for recent work on generative modeling.
News
Feb 13, 2024 | Thanks to NSF for supporting our research via an NSF CAREER Award.
Sep 12, 2023 | Thanks to Intel for supporting our research via Intel's 2023 Rising Star Faculty Award.
Apr 26, 2023 | Invited talks at the coPerception workshop on collaborative perception and learning at ICRA'23, the end-to-end autonomous driving workshop at CVPR'23, the secure and safe autonomous driving workshop at CVPR'23, and the BIRS workshop on 3D generative models.
Mar 24, 2023 | Grateful to receive an NSF award supporting our research on developing the MetaDrive driving simulator into MetaDriverse, an open-source infrastructure for AI research on autonomous driving.
Jan 9, 2023 | I summarized our work going from Network Dissection to Policy Dissection in the talk Discovering Interpretable Concepts in Deep Representations at the IPAM Workshop on Explainable AI for the Sciences: Towards Novel Insights.
Selected Publications
- Guarded Policy Optimization with Imperfect Online Demonstrations. International Conference on Learning Representations (ICLR Spotlight), 2023