Math-AI: Contributed Talks & Posters

NeurIPS 2024 · Workshop

The Math-AI workshop ended with three contributed talks, given by the workshop’s best-paper-award-winning authors, and a poster session.

David Brandfonbrener presented VerMCTS: Synthesizing Multi-Step Programs using a Verifier, a Large Language Model, and Tree Search.

David Brandfonbrener

In VerMCTS, a program verifier provides the reward signal for Monte Carlo tree search (MCTS). By applying the verifier to partial programs that are iteratively expanded, the LLM is effectively partnered with a formal verifier to generate verified programs.
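
To make the idea concrete, here is a minimal Python sketch of the verifier-as-reward loop. The `llm_propose` and `verifier_score` stubs are hypothetical placeholders (the real system pairs an actual LLM with a formal program verifier); the rest is a standard MCTS skeleton, not the paper’s exact algorithm.

```python
import math
import random

# Hypothetical stubs: in the real system these are an LLM proposal step
# and a formal program verifier run on *partial* programs.
def llm_propose(partial_program):
    """Sample a few candidate continuations of the partial program."""
    return [partial_program + f"\nstep_{random.randint(0, 99)}" for _ in range(3)]

def verifier_score(partial_program):
    """1.0 if the partial program verifies, 0.0 if refuted, 0.5 if unknown."""
    return random.choice([0.0, 0.5, 1.0])

class Node:
    def __init__(self, program, parent=None):
        self.program, self.parent = program, parent
        self.children, self.visits, self.value = [], 0, 0.0

def ucb(node, c=1.4):
    """Upper confidence bound balancing exploration and exploitation."""
    return node.value / node.visits + c * math.sqrt(
        math.log(node.parent.visits) / node.visits)

def mcts(root_program, iterations=100):
    root = Node(root_program)
    for _ in range(iterations):
        # Selection: descend by UCB while every child has been visited.
        node = root
        while node.children and all(ch.visits for ch in node.children):
            node = max(node.children, key=ucb)
        # Expansion: ask the LLM to extend the partial program.
        if not node.children:
            node.children = [Node(p, node) for p in llm_propose(node.program)]
        leaf = next((ch for ch in node.children if ch.visits == 0), node)
        # Evaluation: the verifier on the partial program *is* the reward.
        reward = verifier_score(leaf.program)
        # Backpropagation: update statistics along the path to the root.
        while leaf is not None:
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda ch: ch.visits).program
```

The key point is the evaluation step: rather than rolling a program out to completion, the verifier scores the partial program directly, which lets the search prune unverifiable branches early.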

Sean Welleck presented Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for LLM Problem-Solving.

Sean Welleck

This paper centres on the question: how do we best use a given inference compute budget?

It breaks this down into three key questions:

  • What is the optimal model size?
  • What is the best meta-generation strategy?
  • If compute limits are removed, how far can inference strategies take us?

Interestingly, the best answer isn’t to always use the largest possible model. Welleck noted that “smaller models with advanced inference are often optimal”, and pointed the audience to his meta-generation tutorial from earlier this week.
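
As a toy illustration of what “compute-optimal inference” means, the sketch below enumerates hypothetical (model size, sample count) configurations under a fixed FLOPs budget. The model sizes and accuracy curve are made up for illustration and are not the paper’s measurements; only the rough ~2 × parameters FLOPs-per-generated-token estimate is a standard rule of thumb.

```python
import math

# All numbers here are illustrative: three hypothetical model sizes and a
# toy accuracy curve that improves with both model size and sample count.
MODELS = {"small": 1e9, "medium": 7e9, "large": 70e9}

def accuracy(params, n_samples):
    """Toy stand-in for task accuracy under best-of-n sampling."""
    base = 1 - 1 / math.log10(params)   # bigger model, better single attempt
    return 1 - (1 - base) ** n_samples  # classic best-of-n coverage formula

def per_sample_flops(params, tokens=1000):
    """Rough estimate: ~2 * params FLOPs per generated token."""
    return 2 * params * tokens

def best_config(budget_flops):
    """Highest-accuracy (model, n_samples) pair within the budget."""
    best = None
    for name, params in MODELS.items():
        n_samples = int(budget_flops // per_sample_flops(params))
        if n_samples < 1:
            continue
        cand = (accuracy(params, n_samples), name, n_samples)
        if best is None or cand > best:
            best = cand
    return best

print(best_config(budget_flops=1e15))
```

Even under this toy model, the budget is often better spent drawing many samples from a smaller model than a single sample from the largest one, echoing the talk’s conclusion.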

Nelson Vadori presented Learning Mathematical Rules with Large Language Models.

Nelson Vadori

This paper assesses the generalisation abilities of LLMs by fine-tuning open-source models on synthetic data.

It concludes that fine-tuning models on specific mathematical rules allows those rules to be reused in the context of word problems (bottom-up generalisation), and that training on a large and diverse set of tokens improves the models’ ability to generalise specific rules such as distributivity and equation manipulation (top-down generalisation).
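
The paper’s actual rule set, templates, and training setup are its own, but as a hedged sketch of what synthetic rule-training data can look like, here is a toy generator for distributivity examples:

```python
import random

# Hypothetical sketch only: the paper's actual rules and templates differ.
# Each example applies the distributivity rule a*(b + c) -> a*b + a*c.
def distributivity_example():
    a, b, c = (random.randint(1, 20) for _ in range(3))
    return {
        "prompt": f"Expand: {a}*({b} + {c})",
        "completion": f"{a}*{b} + {a}*{c} = {a * (b + c)}",
    }

dataset = [distributivity_example() for _ in range(10_000)]
print(dataset[0])
# e.g. {'prompt': 'Expand: 3*(7 + 12)', 'completion': '3*7 + 3*12 = 57'}
```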

This final set of talks was followed by a poster session.

Poster session (pictured: “Formal Theorem Proving by Rewarding LLMs to Decompose Proofs Hierarchically”)

There was a varied selection of posters, covering topics such as ML for formal mathematics, informal mathematics, and applications in mathematical physics, as well as some new benchmarks.

For a full list of accepted papers, see the workshop website.

There will be one more post on this blog with some highlights from Sunday’s workshops, followed by an overall conference highlights post.