Fei-Fei Li’s talk packed out the largest hall in the venue. Having revolutionised computer vision with the ImageNet dataset, Li described her current focus: moving from seeing to doing through improved spatial intelligence.
Li noted that robots today remain brittle, with significant gaps between laboratory experiments and real-world applications. She surveyed several of her lab’s data-driven robotic-learning projects, including multi-sensory datasets such as ObjectFolder, which catalogues sight, touch, and sound data for a large number of everyday objects, as well as some of her recent World Labs work on creating coherent 3D world models.
The talk ended on a positive note, with Li affirming her belief that AI will augment humans rather than replace them.