Stochastic Taylor Derivative Estimator
Oral. First author Zekun Shi presented this paper; it won a NeurIPS 2024 best paper award.
The paper introduces a new amortisation scheme for automatic differentiation of higher-order derivatives: rather than naively nesting first-order autodiff, whose cost grows exponentially in the derivative order k, it estimates the required derivative contractions stochastically, at a cost that scales linearly in k.
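To give a feel for the kind of quantity being amortised, here is a minimal JAX sketch contrasting the naive nested-autodiff route with a randomised (Hutchinson-style) contraction for the simplest second-order case, the Laplacian. This is not the paper's method: STDE performs such randomised contractions for arbitrary-order operators in a single Taylor-mode pass, which this sketch does not implement, and the function `f` and all helper names below are illustrative only.

```python
# Illustrative sketch (not STDE): exact Laplacian via nested autodiff vs. a
# Hutchinson-style randomised estimate built from Hessian-vector products.
import jax
import jax.numpy as jnp


def f(x):
    # Example scalar field R^d -> R (a stand-in for, e.g., a PINN output).
    return jnp.sum(jnp.sin(x)) + 0.5 * jnp.sum(x ** 2)


def laplacian_exact(f, x):
    # Naive route: materialise the full d x d Hessian with nested autodiff,
    # then take its trace. Cost grows quickly with dimension and with order.
    return jnp.trace(jax.hessian(f)(x))


def laplacian_estimate(f, x, key, num_samples=256):
    # Randomised contraction: E[v^T H v] = tr(H) when E[v v^T] = I.
    # Each sample needs only one Hessian-vector product (jvp over grad).
    def vhv(v):
        _, hv = jax.jvp(jax.grad(f), (x,), (v,))  # hv = H @ v
        return jnp.vdot(v, hv)

    vs = jax.random.normal(key, (num_samples, x.shape[0]), dtype=x.dtype)
    return jnp.mean(jax.vmap(vhv)(vs))


x = jnp.linspace(-1.0, 1.0, 128)
key = jax.random.PRNGKey(0)
print(laplacian_exact(f, x), laplacian_estimate(f, x, key))
```

The estimate is unbiased but noisy; the point of the sketch is only the structural contrast between building full derivative tensors and contracting them against random directions.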
Learning Formal Mathematics From Intrinsic Motivation
Oral. First author Gabriel Poesia presented a new method to learn formal mathematics in the Peano environment through intrinsic motivation.
The approach uses a teacher-student setup to jointly learn to generate conjectures and prove theorems.
The “teacher” proposes new goals (conjectures), for which the “student” then attempts to find proofs through tree search.
Starting from the basic axioms alone and driven purely by intrinsic rewards, the system nonetheless improved over the course of training on extrinsic measures (success at proving simple human-written theorems).
The method requires no human data, and it seems likely that with more compute it would continue to improve.
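As a purely schematic illustration of the teacher-student cycle described above, here is a toy Python sketch. It is not the Peano implementation: every name in it (ConjecturePolicy, ProverPolicy, propose, search_proof, update) is a made-up placeholder, and the learned policies and proof search are replaced with trivial stand-ins.

```python
# Toy sketch of a conjecture-prove self-play loop (not the Peano system).
import random


class ConjecturePolicy:
    """'Teacher': proposes goals (conjectures) of adjustable difficulty."""

    def propose(self):
        # Toy stand-in: a conjecture is just a random integer "difficulty".
        return random.randint(1, 10)

    def update(self, conjecture, proved):
        # Intrinsic signal would favour conjectures near the edge of the
        # student's current ability (provable, but not trivially so).
        pass


class ProverPolicy:
    """'Student': attempts to prove conjectures via (tree) search."""

    def __init__(self):
        self.skill = 3.0

    def search_proof(self, conjecture, budget=100):
        # Toy stand-in for proof search: succeed if skill covers difficulty.
        return conjecture <= self.skill

    def update(self, conjecture, proved):
        # Learn from successful proofs (here: a crude skill increase).
        if proved:
            self.skill += 0.01


teacher, student = ConjecturePolicy(), ProverPolicy()
for step in range(1000):
    conjecture = teacher.propose()
    proved = student.search_proof(conjecture)
    student.update(conjecture, proved)   # student improves from found proofs
    teacher.update(conjecture, proved)   # teacher steers conjecture difficulty
```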