Research
In early infancy, babies learn to track moving objects, look longer at unlikely events, and develop an understanding of object permanence and solidity. Through this process, they acquire knowledge about the categories, functions, motions, and intuitive physics of the objects in their environment.
Can artificial vision systems achieve a comparable level of visual understanding by observing the world through video? This question naturally leads to learning world models from continuous visual observations with minimal supervision.
My research interests therefore lie at the intersection of self-supervised learning from videos, understanding the structure of the physical world, and human visual perception (e.g., visual abstraction).
(* indicates equal contribution)
SEVA: Leveraging sketches to evaluate alignment between human and machine visual abstraction
Kushin Mukherjee*, Holly Huey*, Xuanchen Lu*, Yael Vinker, Rio Aguina-Kang, Ariel Shamir, Judith E Fan
Advances in Neural Information Processing Systems (NeurIPS), Datasets and Benchmarks Track, 2023
arXiv / website / video / code / dataset
Learning Dense Correspondences between Photos and Sketches
Xuanchen Lu, Xiaolong Wang, Judith E Fan
International Conference on Machine Learning (ICML), 2023
arXiv / website / code / dataset
Visual explanations prioritize functional properties at the expense of visual fidelity
Holly Huey, Xuanchen Lu, Caren M Walker, Judith E Fan
Cognition, 2023
paper / journal / dataset
Evaluating machine comprehension of sketch meaning at different levels of abstraction
Kushin Mukherjee, Xuanchen Lu, Holly Huey, Yael Vinker, Rio Aguina-Kang, Ariel Shamir, Judith E Fan
Proceedings of the 45th Annual Conference of the Cognitive Science Society (CogSci), 2023
paper / dataset
Evaluating machine comprehension of sketch meaning at different levels of abstraction
Xuanchen Lu, Kushin Mukherjee, Rio Aguina-Kang, Holly Huey, Judith E Fan
Journal of Vision, 2023
abstract