
ICML 2025

Contact: Harald Carlens on Whova

Workshop Highlights: Foundation Models for Structured Data and AI for Math

ICML 2025 wrapped up with 33 workshops spread across two days. Workshops allow researchers to share newer work in a less formal setting than the main conference, with each workshop focusing on a specific domain or area of research.

Based on anticipated attendance numbers in the conference app, the three most popular workshops across the two days were Multi-Agent Systems in the Era of Foundation Models: Opportunities, Challenges and Futures, Exploration in AI Today (EXAIT), and Foundation Models for Structured Data (FMSD).

Below are a few brief highlights from two of the workshops.

Foundation Models for Structured Data

Workshop on Foundation Models for Structured Data

This was the first ICML workshop on Foundation Models for Structured Data. It covered a broad range of topics related to pre-trained models for tabular and time-series data.

There was a generally shared view that foundation models for structured data are still in their infancy, with many promising directions for further work.

Andrew Gordon Wilson’s talk (“A Universal Approach to Model Construction”) included some advice on model selection (embrace a highly expressive hypothesis space in combination with a compression bias). He questioned the view that deep learning is ‘special’ compared to other machine learning approaches, and suggested that the success of overparameterisation observed in phenomena like double descent is not unique to deep learning.

For more on this view, see his ICML 2025 position paper Position: Deep Learning is Not So Mysterious or Different.

A screenshot of the first page of the paper "Position: Deep Learning is Not So Mysterious or Different"

Andrew Gordon Wilson's position paper

Josh Gardner’s talk (“Toward the GPT-3 Moment for Tabular Data Models”) reviewed the progress made in the first three GPT models, and attributed their success to three main factors (large-scale data, reliable benchmarks, and scalability) before going on to evaluate the state of these factors for tabular foundation models.

The talk noted that there’s no equivalent to CommonCrawl for tabular data (yet), and that much of the large-scale tabular data is synthetic (for example, TabPFN is entirely trained on synthetic data). Currently most benchmarks focus on “single-table” prediction, and there is a need for more tabular benchmarks aimed at foundation modelling or few-shot/in-context learning.

He also highlighted some misconceptions, coining the phrase “The Token Fallacy,” referring to the common belief that “models that tokenise numbers cannot effectively represent them”, as well as reminding researchers of the importance of building with exponentially improving compute in mind.

At the end of the workshop, the organisers gave out three best paper awards.

AI for Math

AI for Math Workshop

This was the second year of the AI for Math workshop at ICML (summary of the previous ICML AI for math workshop), alongside a similar series of workshops at NeurIPS (NeurIPS 2024 Math-AI workshop coverage).

One recurring theme throughout this workshop was the high-level choice of research direction: does the community want to build systems for fully autonomous mathematical research, or tools to support human reasoning and decision-making?

Some recent work discussed in the workshop included Goedel-prover-v2, a new state-of-the-art open-weights model for proving theorems in Lean, APE-Bench I, a new proof engineering benchmark, and CSLib, a new open-source Lean 4 library for foundational results in computer science, as well as an update on the AI Mathematical Olympiad.

There were two competition tracks in this workshop:

  • Track 1, proof engineering (APE-Bench I), was won by Sparsh Tewadia, using Gemini 2.5.
  • Track 2, reasoning from physics diagrams (SeePhys), was won by Ruitao Wu, Hao Liang, Bohan Zeng, Junbo Niu, Wentao Zhang, and Bin Dong, using a combination of Gemini 2.5 and OpenAI o3.

There were two best paper awards.

Test of Time Award: Batch Normalization

The ICML 2025 test of time award (for an impactful paper from ICML 2015) went to Sergey Ioffe and Christian Szegedy for their paper which introduced the Batch Normalization (“batch norm”) procedure, developed while the authors were both at Google.

Batch norm normalises inputs to intermediate neural net layers — i.e., transforms them so each feature has zero mean and unit variance. At train time, normalisation is done on a per-minibatch basis. At inference time, normalisation is done using the population statistics of the training set.
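As a rough sketch (not the authors' reference implementation), the core operation can be written in a few lines of NumPy, where gamma and beta are the learned scale and shift parameters from the paper:

```python
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """Training mode: normalise each feature using statistics of the current minibatch.

    x: array of shape (batch_size, num_features).
    gamma, beta: learned scale and shift, each of shape (num_features,).
    """
    mu = x.mean(axis=0)                      # per-feature mean over the batch
    var = x.var(axis=0)                      # per-feature variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)    # zero mean, unit variance per feature
    return gamma * x_hat + beta

def batch_norm_eval(x, gamma, beta, pop_mean, pop_var, eps=1e-5):
    """Inference mode: normalise using population statistics estimated during training."""
    x_hat = (x - pop_mean) / np.sqrt(pop_var + eps)
    return gamma * x_hat + beta
```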

"Screenshot of the first page of the Batch Normalization paper"

The ICML 2015 Batch Normalization paper

After receiving the award, Sergey Ioffe gave a talk, starting with an explanation of the initial motivation behind the paper: figuring out why ReLUs worked better than sigmoids as activation functions, and what it would take for sigmoids to work well.

Impact

While the paper did end up enabling the more widespread use of sigmoid activation functions¹, the impact was more profound: batch norm enabled significantly faster and more efficient training, and was followed by other normalisation procedures including group norm, weight norm, and instance norm, as well as layer norm, which was a key component of the 2017 Transformer architecture.

Figure 2 in the batch norm paper shows a 14-fold training speed increase over the baseline Inception model, which took around 30 days to train on a cluster of CPUs. This speedup was achieved by adding batch norm and increasing the learning rate, with the higher learning rate itself made viable by the addition of batch norm.

"A line chart with number of training steps on the x axis, and validation accuracy on the y axis, showing that the 'BN-x30' model with batch norm and a higher learning rate is much faster to reach a given level of accuracy than the baseline Inception model. "

The 14x training efficiency increase from batch norm + higher learning rate

Interpretation

This talk provided an interesting reminder of the empirical nature of deep learning research, as Ioffe described how the current understanding of why batch norm works is very different to the initial explanation of its mechanics.

He mentioned that while the 2015 paper attributed the success of batch norm to a reduction in “internal covariate shift”, later research showed that the improvements are due to a smoothing of the optimisation landscape. He pointed to the 2018 NeurIPS paper How Does Batch Normalization Help Optimization? by Santurkar et al for an explanation of this phenomenon.

Scale Invariance

Ioffe also presented some analysis with implications for scale-invariant models more generally.

Batch norm causes scale-invariance that results in implicit learning-rate scheduling, as the relative magnitude of the gradient gets smaller when training progresses and weights become larger.

This holds for general scale-invariant models, and there is an interaction between normalisation, weight decay, and learning rate scheduling. For more discussion of this phenomenon, Ioffe mentioned Layer Normalization (Ba et al, 2016) and L2 Regularization versus Batch and Weight Normalization (van Laarhoven, 2017).
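As a brief, informal sketch of the argument (my notation, not Ioffe's slides): for a scale-invariant loss f, scaling the weights leaves the loss unchanged but shrinks the gradient, so the relative size of each SGD step decays automatically as the weights grow.

```latex
% Scale invariance: f(c w) = f(w) for all c > 0.
% Differentiating both sides with respect to w gives c \nabla f(c w) = \nabla f(w), i.e.
\nabla f(c w) = \frac{1}{c} \, \nabla f(w)

% With an SGD step \Delta w = -\eta \nabla f(w), the relative update along a fixed direction is
\frac{\lVert \Delta w \rVert}{\lVert w \rVert}
  = \frac{\eta \, \lVert \nabla f(w) \rVert}{\lVert w \rVert}
  \;\propto\; \frac{1}{\lVert w \rVert^{2}}
% so the effective learning rate falls as the weights grow during training --
% the implicit learning-rate scheduling described in the talk.
```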

Interactions

Ioffe ended the talk with a discussion of how, in batch norm, calculating statistics on the minibatch level creates additional interactions between training examples.

He pointed out that, while this can have undesirable side-effects in some cases, some more recent works make use of per-example interactions to improve training (notably SimCLR by Chen et al, 2020 and LongLlama by Tworkowski et al, 2023), and speculated that this could be a key mechanism employed to improve future models.
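As a toy illustration of this coupling (a hypothetical example, not from the talk): perturbing one example in a minibatch changes the shared batch statistics, and therefore the normalised output of every other example in the batch.

```python
import numpy as np

def normalise(x, eps=1e-5):
    # Batch-norm-style normalisation: per-feature statistics over the batch.
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

rng = np.random.default_rng(0)
batch = rng.normal(size=(4, 3))     # 4 examples, 3 features

before = normalise(batch)
batch[0] += 10.0                    # perturb only example 0
after = normalise(batch)

# Example 3's normalised output changes even though example 3 itself didn't,
# because it shares batch statistics with example 0.
print(np.allclose(before[3], after[3]))   # False
```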

Footnotes

  1. As per the paper: “this enables the sigmoid nonlinearities to more easily stay in their non-saturated regimes, which is crucial for training deep sigmoid networks but has traditionally been hard to accomplish” ↩

Outstanding Papers & Test of Time Award Announced

There were six Outstanding Paper Award winners this year in the main track, and two in the position paper track.

Outstanding Papers (Main Track)

Outstanding Papers (Position Paper Track)

Test of Time Award

This year’s Test of Time Award goes to Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift by Sergey Ioffe and Christian Szegedy. This paper from 2015 introduced Batch Normalization, a technique that normalises the inputs to intermediate neural network layers over the batch dimension, enabling the use of higher learning rates and faster training. It is a forerunner of Layer Normalization, which performs a similar operation along the feature dimension and was used in the original Transformer paper. The authors will give a talk at 8:30am Vancouver time on Wednesday.
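To make the batch-versus-feature distinction concrete, here is a minimal sketch (ignoring the learned scale and shift parameters) of which axis each technique normalises over:

```python
import numpy as np

x = np.random.default_rng(1).normal(size=(32, 64))   # (batch, features)
eps = 1e-5

# Batch norm: one mean/variance per feature, computed over the batch dimension.
batch_norm = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)

# Layer norm: one mean/variance per example, computed over the feature dimension.
layer_norm = (x - x.mean(axis=1, keepdims=True)) / np.sqrt(x.var(axis=1, keepdims=True) + eps)
```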

There were two honorable mentions:

The Trust Region Policy Optimization paper introduced the TRPO algorithm for reinforcement learning, a precursor to the now-ubiquitous Proximal Policy Optimization (PPO) algorithm.

A virtual welcome to Vancouver

ICML 2025 is now underway, starting with expo talks and tutorials. The main conference will run Tuesday-Thursday, followed by two days of workshops.

This year our coverage will be remote-only, and less extensive than for previous conferences.

For the in-person attendees, the venue is the Vancouver Convention Center. If you’re feeling a sense of deja vu, it might be because this venue has hosted many past ML conferences, including NeurIPS 2024 just seven months ago.

"The interior of a conference centre. A few researchers are walking around and talking to each other. Grey carpets, wooden slats on the ceiling. "

First floor of the Vancouver Convention Center's West building (December 2024)

The convention center has two buildings: the East building, which is shared with the Pan Pacific Hotel, and the West building, which will host the exhibit hall and invited talks in its basement. An underground walkway links the two buildings. There is an interactive map on the convention center website.

đŸč Socials, happy hours, and dinners

Official conference socials taking place at the conference venue are marked with an asterisk. Most others require registration and will probably fill up quickly.

Monday 14th

Tuesday 15th

Wednesday 16th

Thursday 17th

Friday 18th

Saturday 19th

Something missing? Message me on the Whova conference app - search “Harald Carlens” under Attendees.