Invited Talks: Zico Kolter & Song-Chun Zhu

ICLR 2025

Zico Kolter: Building Safe and Robust AI Systems

Invited Talk

In this year’s first invited talk, Zico Kolter began with a look back at the work presented at ICLR 2015, ten years ago. Out of just 31 main-conference papers that year, several went on to have a major impact on the field — including the Adam optimiser and neural machine translation papers, which won the Test of Time Award and will be discussed in more depth later this week.

[Image: Zico Kolter holding a microphone and addressing the audience.]

Zico Kolter

Kolter presented years of his lab’s work through four eras: optimisation, certified adversarial robustness, empirics of deep learning, and AI safety. He highlighted two recent pieces of work in the AI safety category: antidistillation sampling (generating text from a model in a way that makes distillation harder while keeping the outputs generally useful) and safety pretraining (incorporating safety guardrails early in the model training process, not just in post-training).

Kolter ended with a call to action, arguing that AI safety should be a key focus for academic research today and expressing his expectation that work in this area will significantly shape the field’s future development.

Song-Chun Zhu: Framework, Prototype, Definition and Benchmark

Invited Talk

Song-Chun Zhu’s talk started from a philosophical vantage point, with a reflection on how “AGI” might be defined, and how any such definition hinges on the definition of what it means to be human.

[Image: A large, packed hall with an audience watching Song-Chun Zhu's talk. At the front, multiple large screens show presentation slides and enlarged images of the speaker.]

Song-Chun Zhu presenting in Hall 1

Zhu then explored the space of cognitive agents through his three-dimensional framework, which comprises the agent’s cognitive architecture (how the agent works), its potential functions (what it can do), and its value function (what it wants to do).

He also summarised some of his lab’s research, including the development of TongTong, an agent trained in a simulated physical environment, as well as the Tong Test, a benchmark aimed at evaluating AGI.