Math-AI: Adam Wagner, Jeremy Avigad, James Zou

NeurIPS 2024 · Workshop

Three more invited talks from the Math-AI workshop.

Google DeepMind’s Adam Zsolt Wagner presented a mathematician’s perspective, and spoke about using machine learning as a tool to find good constructions in mathematics — for example, to identify counterexamples to conjectures.

In contrast to the FunSearch method, which searches through the space of programs that generate solutions, Wagner’s proposed approach searches through solution space directly.

A speaker addressing an audience from behind a lectern.

Adam Zsolt Wagner

By choosing the right representation, tokenising it for a (small) transformer-based language model, and alternating between local search (making small changes to existing constructions) and global search (training the model on the best constructions and generating more constructions like them), the method finds interesting new constructions that are difficult for mathematicians to find by hand.

For more details, see Wagner’s PatternBoost paper.
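The alternating local/global loop can be sketched in a few lines. This is a toy illustration, not Wagner's implementation: the bit-string objective, the greedy bit-flip local search, and the position-wise sampling from the elite set (standing in for training a transformer on the best constructions) are all assumptions made for the sake of a runnable example.

```python
import random

def score(bits):
    # Toy objective standing in for a real combinatorial score
    # (e.g. how close a construction comes to a counterexample):
    # here, reward alternating bit patterns.
    return sum(bits[i] != bits[i + 1] for i in range(len(bits) - 1))

def local_search(bits, steps=50):
    # Local phase: random single-bit flips, kept if they don't hurt.
    best = list(bits)
    for _ in range(steps):
        i = random.randrange(len(best))
        cand = list(best)
        cand[i] ^= 1
        if score(cand) >= score(best):
            best = cand
    return best

def global_step(elite, n_new):
    # Global phase stand-in: instead of training a transformer on the
    # elite set, sample each position from the elite's empirical
    # distribution -- a crude generative model of "more like these".
    length = len(elite[0])
    return [[random.choice([e[i] for e in elite]) for i in range(length)]
            for _ in range(n_new)]

def patternboost_sketch(length=20, pop=30, rounds=5, elite_k=10):
    random.seed(0)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop)]
    for _ in range(rounds):
        # Alternate: improve everything locally, then regenerate
        # the rest of the population from the best constructions.
        population = [local_search(b) for b in population]
        population.sort(key=score, reverse=True)
        elite = population[:elite_k]
        population = elite + global_step(elite, pop - elite_k)
    return max(population, key=score)

best = patternboost_sketch()
print(score(best))  # the maximum possible score for length 20 is 19
```

Even this crude version illustrates the division of labour: local search polishes individual constructions, while the global (generative) step spreads the patterns shared by the best ones.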

Wagner’s conclusions:

  • Even the simplest learning methods can produce good results in mathematics
  • Slightly more complex algorithms can be useful tools for mathematicians
  • There is plenty of low-hanging fruit in this area; the next few years should be really exciting!

Jeremy Avigad, director of the Hoskinson Center for Formal Mathematics at CMU, opened his talk with a history of automated reasoning and interactive theorem provers.

Avigad argued that machine learning can help interactive theorem proving in three main ways: premise selection, proof search, and copilots.

Conversely, interactive theorem proving is useful for machine learning because it provides a clear measure of success (it’s hard, but correct proofs are verifiable).


Jeremy Avigad

Avigad highlighted the importance of making AI mathematics tools useful to working mathematicians, and mentioned miniCTX and ImProver as two research works with this direction in mind.

He also recommended Jason Rute’s talk on The Last Mile (pdf) for more on how to make AI theorem provers useful to real users.


Stanford’s James Zou proposed that instead of thinking of AI as a tool, it’s sometimes useful to think of it as a partner.

He gave an overview of two research projects done by his lab — The Virtual Lab, and SyntheMol.

In the former, a “virtual lab” was set up, consisting of a set of “LLM agents”. Each agent, built on top of GPT-4, is given a specific role (e.g. biologist) and access to domain-specific tools (e.g. AlphaFold). The lab then conducts large numbers of virtual “meetings”, with some input from human researchers.
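The agent-and-meeting pattern can be sketched minimally. Everything below is hypothetical scaffolding for illustration: the `Agent.reply` method stubs out what would be an LLM call (GPT-4 in the talk), and the role names and `run_meeting` structure are assumptions, not the Virtual Lab's actual API.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    # Each agent is conditioned on a role; in the real system the
    # reply would come from an LLM with role-specific prompting and
    # tool access (e.g. AlphaFold). Here it is a deterministic stub.
    role: str

    def reply(self, topic, transcript):
        # Stub for an LLM call conditioned on role + meeting history.
        return f"[{self.role}] comment on {topic!r} (turn {len(transcript) + 1})"

def run_meeting(agents, topic, rounds=2, human_input=None):
    # A "meeting": agents speak in round-robin order; an optional
    # human message is prepended, mirroring the light human
    # oversight described in the talk.
    transcript = []
    if human_input:
        transcript.append(f"[human] {human_input}")
    for _ in range(rounds):
        for agent in agents:
            transcript.append(agent.reply(topic, transcript))
    return transcript

lab = [Agent("principal investigator"), Agent("biologist"), Agent("ML specialist")]
minutes = run_meeting(lab, "nanobody design", human_input="focus on SARS-CoV-2")
print(len(minutes))  # 7 = 1 human message + 3 agents x 2 rounds
```

The design point is that the unit of work is a meeting transcript, not a single prompt: each agent sees the accumulated discussion, so role-specific expertise can build on earlier turns.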

This process was used to successfully design new SARS-CoV-2 nanobodies, which were then experimentally validated.


James Zou

In SyntheMol, generative models were used to create “recipes” for new, easy-to-synthesise antibiotics (in contrast to some prior approaches, where new molecules were designed directly but turned out to be hard to synthesise). Of the 70 recipes sent to a lab, 58 were successfully synthesised and experimentally validated against diverse bacteria.

These three invited talks gave an