Apple is sponsoring the International Conference on Machine Learning (ICML) 2024, which takes place in person from July 21 to 27 at the Messe Wien Exhibition and Congress Center in Vienna, Austria. ICML is globally renowned for presenting and publishing cutting-edge research on all aspects of machine learning, spanning closely related areas such as artificial intelligence, statistics, and data science, as well as important application areas such as machine vision, computational biology, speech recognition, and robotics. Below is the schedule of our sponsored workshops and events at ICML 2024.
Schedule
Stop by the Apple booth in Halle/Hall B, Booth #110, from 11:30 am to 6:45 pm CEST on July 22, and from 10:00 am to 6:00 pm CEST on July 23 and 24.
Sunday, July 21
- Accelerating research in Private Federated Learning with the pfl-research simulation framework
- 2:30 pm - 3:30 pm CEST, Halle/Hall A8
- Filip Granqvist
Monday, July 22
- Queer in AI
- 9:00 am - 4:00 pm CEST, Stolz 2
- Isha Garg and Adam Golinski will be representing Apple.
- LatinX in AI
- 9:00 am - 4:00 pm CEST, Stolz 1
- Pau Rodriguez Lopez and Eduardo Martinez Montes will be representing Apple.
- LatinX in AI
- 5:30 pm - 7:30 pm CEST, The View Cafe-Bar Restaurant
- Pau Rodriguez Lopez and Eduardo Martinez Montes will be representing Apple.
Tuesday, July 23
- On the Minimal Degree Bias in Generalization on the Unseen for non-Boolean Functions
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Denys Pushkin (EPFL), Raphael Berthier (EPFL), Emmanuel Abbe (Apple/EPFL)
- Improved Modelling of Federated Datasets using Mixtures-of-Dirichlet-Multinomials
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Jonny Scott (Institute of Science and Technology Austria), Aine Cahill
- Women in Machine Learning (WiML)
- 6:30 pm - 9:00 pm CEST, Badeschiff Wien
- Michel Wong and Isha Garg will be representing Apple.
Wednesday, July 24
- Women in Machine Learning (WiML)
- 9:00 am - 4:00 pm CEST, Schubert 1-6
- Catherine Vilhauer, Pau Rodriguez Lopez, and Tillie Hands will be representing Apple.
- Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Xavier Suau Cuadros, Pieter Delobelle (KU Leuven), Rin Metcalf Susa, Armand Joulin (Google DeepMind (work done while at Apple)), Nick Apostoloff, Luca Zappella, Pau Rodriguez Lopez
- On a Practical Implementation of Brenier’s Polar Factorization Theorem and Its Applications to Optimization and Sampling
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Marco Cuturi Cameto, Nina Vesseron (ENSAE)
- How Smooth Is Attention?
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Valérie Castin (ENS), Pierre Ablin, Gabriel Peyré (ENS)
- Scalable Pre-training of Large Autoregressive Image Models
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Alaaeldin Mohamed Elnouby Ali, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista Martin, Josh Susskind, Armand Joulin (Google DeepMind (work done while at Apple))
- Data-free Bootstrapping Distillation
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Lingjie Liu (University of Pennsylvania), Josh Susskind
Thursday, July 25
- KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Minsik Cho, Mohammad Rastegari (Meta (work done while at Apple)), Devang Naik
- Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Raviteja Vemulapalli, Hadi Pour Ansari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari (Meta (work done while at Apple)), Oncel Tuzel
- Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
- 11:30 am - 1:00 pm CEST, Halle/Hall C 4-9
- Hilal Asi, Vitaly Feldman, Jelani Nelson (University of California Berkeley), Kunal Talwar, Huy Nguyen (Northeastern University), Samson Zhou (Texas A&M University)
- Optimization without Retraction on the Random Generalized Stiefel Manifold
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Simon Vary (UCLouvain), Pierre Ablin, Bin Gao (Chinese Academy of Sciences), Pierre-Antoine Absil (UCLouvain)
- Swallowing the Bitter Pill: Simplified Scalable Conformer Generation
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Yuyang Wang, Ahmed Elhag (University of Oxford), Navdeep Jaitly, Josh Susskind, Miguel Angel Bautista Martin
- Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Thomas Merth, Qichen Fu, Mohammad Rastegari (Meta (work done while at Apple)), Mahyar Najibi
- Executable Code Actions Elicit Better LLM Agents
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Xingyao Wang (University of Illinois Urbana-Champaign), Yangyi Chen (University of Illinois Urbana-Champaign), Lifan Yuan (University of Illinois Urbana-Champaign), Yizhe Zhang, Hao Peng (University of Illinois Urbana-Champaign), Heng Ji (University of Illinois Urbana-Champaign)
- Careful with that Scalpel: Improving Gradient Surgery with an EMA
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Pierre Ablin, James Thornton, Eugene Ndiaye, Yu-Guan Hsieh, Michal Klein, Marco Cuturi Cameto
- Contrasting Multiple Representations with the Multi-Marginal Matching Gap
- 1:30 pm - 3:00 pm CEST, Halle/Hall C 4-9
- Zoe Piran, Michal Klein, James Thornton, Marco Cuturi Cameto
- Contrasting Multiple Representations with the Multi-Marginal Matching Gap
- 5:15 pm - 5:30 pm CEST, Halle/Hall C 4-9, Oral 6x Representation Learning 2
- Zoe Piran, Michal Klein, James Thornton, Marco Cuturi Cameto
Friday, July 26
- Efficient Systems for Foundation Models Workshop 2024
- 9:00 am CEST, Lehar 2
- Revealing the Utilized Rank of Subspaces of Learning in Neural Networks
- Isha Garg, Eshan Verma, Daniel Ulbricht, Christian Koguchi
- Workshop on Foundation Models in the Wild 2024
- 9:00 am CEST, Straus 1
- Aligning Text-to-Image Diffusion as GFlowNets
- Dinghuai Zhang (University of Montreal), Yizhe Zhang, Jiatao Gu, Ruixiang Zhang, Josh Susskind, Navdeep Jaitly, Shuangfei Zhai
- Projected Language Models: A Large Model Pre-Segmented Into Smaller Ones
- David Grangier, Angelos Katharopoulos, Pierre Ablin, Awni Hannun
- AI for Science: Scaling in AI for Scientific Discovery
- 9:00 am CEST, Halle/Hall C 4-9
- Swallowing the Bitter Pill: Simplified Scalable Conformer Generation
- Yuyang Wang, Ahmed Elhag (University of Oxford), Navdeep Jaitly, Josh Susskind, Miguel Angel Bautista Martin
- ES-FoMo II: 2nd Workshop on Efficient Systems for Foundation Models
- 9:00 am CEST, Lehar 2
- OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
- Sachin Mehta, Mohammad Sekhavat, Qingqing Cao, Max Horton, Yanzi Jin, Frank Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal (work done while at Apple), Mohammad Rastegari (Meta (work done while at Apple))
Accepted Papers
- Aligning Text-to-Image Diffusion as GFlowNets
- Dinghuai Zhang (University of Montreal), Yizhe Zhang, Jiatao Gu, Ruixiang Zhang, Josh Susskind, Navdeep Jaitly, Shuangfei Zhai
- Careful with that Scalpel: Improving Gradient Surgery with an EMA
- Pierre Ablin, James Thornton, Eugene Ndiaye, Yu-Guan Hsieh, Michal Klein, Marco Cuturi Cameto
- Contrasting Multiple Representations with the Multi-Marginal Matching Gap
- Zoe Piran, Michal Klein, James Thornton, Marco Cuturi Cameto
- Data-free Bootstrapping Distillation
- Jiatao Gu, Shuangfei Zhai, Yizhe Zhang, Lingjie Liu (University of Pennsylvania), Josh Susskind
- Executable Code Actions Elicit Better LLM Agents
- Xingyao Wang (University of Illinois Urbana-Champaign), Yangyi Chen (University of Illinois Urbana-Champaign), Lifan Yuan (University of Illinois Urbana-Champaign), Yizhe Zhang, Hao Peng (University of Illinois Urbana-Champaign), Heng Ji (University of Illinois Urbana-Champaign)
- How Smooth Is Attention?
- Valérie Castin (ENS), Pierre Ablin, Gabriel Peyré (ENS)
- Improved Modelling of Federated Datasets using Mixtures-of-Dirichlet-Multinomials
- Jonny Scott (Institute of Science and Technology Austria), Aine Cahill
- Knowledge Transfer from Vision Foundation Models for Efficient Training of Small Task-specific Models
- Raviteja Vemulapalli, Hadi Pour Ansari, Fartash Faghri, Sachin Mehta, Mehrdad Farajtabar, Mohammad Rastegari (Meta (work done while at Apple)), Oncel Tuzel
- KV-Runahead: Scalable Causal LLM Inference by Parallel Key-Value Cache Generation
- Minsik Cho, Mohammad Rastegari (Meta (work done while at Apple)), Devang Naik
- On a Practical Implementation of Brenier’s Polar Factorization Theorem and Its Applications to Optimization and Sampling
- Marco Cuturi Cameto, Nina Vesseron (ENSAE)
- On the Minimal Degree Bias in Generalization on the Unseen for non-Boolean Functions
- Denys Pushkin (EPFL), Raphael Berthier (EPFL), Emmanuel Abbe (Apple/EPFL)
- OpenELM: An Efficient Language Model Family with Open Training and Inference Framework
- Sachin Mehta, Mohammad Sekhavat, Qingqing Cao, Max Horton, Yanzi Jin, Frank Sun, Iman Mirzadeh, Mahyar Najibi, Dmitry Belenko, Peter Zatloukal (work done while at Apple), Mohammad Rastegari (Meta (work done while at Apple))
- Optimization without Retraction on the Random Generalized Stiefel Manifold
- Simon Vary (UCLouvain), Pierre Ablin, Bin Gao (Chinese Academy of Sciences), Pierre-Antoine Absil (UCLouvain)
- Private Vector Mean Estimation in the Shuffle Model: Optimal Rates Require Many Messages
- Hilal Asi, Vitaly Feldman, Jelani Nelson (University of California Berkeley), Kunal Talwar, Huy Nguyen (Northeastern University), Samson Zhou (Texas A&M University)
- Projected Language Models: A Large Model Pre-Segmented Into Smaller Ones
- David Grangier, Angelos Katharopoulos, Pierre Ablin, Awni Hannun
- Revealing the Utilized Rank of Subspaces of Learning in Neural Networks
- Isha Garg, Eshan Verma, Daniel Ulbricht, Christian Koguchi
- Scalable Pre-training of Large Autoregressive Image Models
- Alaaeldin Mohamed Elnouby Ali, Michal Klein, Shuangfei Zhai, Miguel Angel Bautista Martin, Josh Susskind, Armand Joulin (Google DeepMind (work done while at Apple))
- Superposition Prompting: Improving and Accelerating Retrieval-Augmented Generation
- Thomas Merth, Qichen Fu, Mohammad Rastegari (Meta (work done while at Apple)), Mahyar Najibi
- Swallowing the Bitter Pill: Simplified Scalable Conformer Generation
- Yuyang Wang, Ahmed Elhag (University of Oxford), Navdeep Jaitly, Josh Susskind, Miguel Angel Bautista Martin
- Whispering Experts: Neural Interventions for Toxicity Mitigation in Language Models
- Xavier Suau Cuadros, Pieter Delobelle (KU Leuven), Rin Metcalf Susa, Armand Joulin (Google DeepMind (work done while at Apple)), Nick Apostoloff, Luca Zappella, Pau Rodriguez Lopez
Demos
MLX
We are demonstrating large model inference and training on device using MLX. MLX is a flexible array framework optimized for Apple silicon, brought to you by Apple Machine Learning Research. It enables training and inference of arbitrarily complex models on Apple silicon devices with concise, flexible code.
In this demo we showcase fine-tuning of a 7B parameter LLM on an iPhone, image generation using a large diffusion model on an iPad, and text generation using a number of large language models on an M2 Ultra Mac Studio and an M3 MacBook Pro.
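For a flavor of the framework, below is a minimal MLX training-step sketch. The toy MLP, data, and hyperparameters are illustrative stand-ins (not the 7B fine-tuning demo itself), assuming MLX is installed via `pip install mlx` on an Apple silicon Mac.

```python
import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

class MLP(nn.Module):
    """Toy two-layer network standing in for a real model."""
    def __init__(self, dim_in: int, dim_hidden: int, dim_out: int):
        super().__init__()
        self.fc1 = nn.Linear(dim_in, dim_hidden)
        self.fc2 = nn.Linear(dim_hidden, dim_out)

    def __call__(self, x):
        return self.fc2(nn.relu(self.fc1(x)))

def loss_fn(model, x, y):
    return nn.losses.cross_entropy(model(x), y).mean()

model = MLP(32, 64, 10)
optimizer = optim.AdamW(learning_rate=1e-3)
loss_and_grad_fn = nn.value_and_grad(model, loss_fn)

x = mx.random.normal((8, 32))        # toy input batch
y = mx.random.randint(0, 10, (8,))   # toy class labels
loss, grads = loss_and_grad_fn(model, x, y)  # forward + backward
optimizer.update(model, grads)               # apply the gradient step
mx.eval(model.parameters(), optimizer.state) # MLX is lazy: force evaluation
print(loss.item())
```

Because MLX evaluates lazily, the `mx.eval` call at the end of the step is what actually materializes the updated parameters and optimizer state on device.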
Private Federated Learning (PFL)
This demo showcases Apple’s Private Federated Learning (PFL) technology. It builds on pfl-research, the open-source simulation framework for PFL research that Apple released in March 2024 at the Apple PPML Workshop. We show how Siri can play Music and Podcasts on iPhone, a shipped, user-facing iOS feature that leverages several technologies, including Siri Signals, Siri Inference, Private Federated Learning (PFL), and Differential Privacy (DP).
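To make the ingredients concrete, here is a conceptual sketch of one round of differentially private federated averaging in plain NumPy. It is only an illustration of the idea behind such simulations, not the pfl-research API; every name and constant below is hypothetical.

```python
import numpy as np

np.random.seed(0)

def client_update(w, data, lr=0.1):
    """One local gradient step on a toy quadratic loss ||w - mean(data)||^2."""
    x_bar = data.mean(axis=0)
    return w - lr * 2.0 * (w - x_bar)

def dp_fedavg_round(w, client_datasets, clip=1.0, noise_multiplier=0.5):
    """Clip each client's model delta, average, and add Gaussian noise."""
    deltas = []
    for data in client_datasets:
        delta = client_update(w, data) - w
        scale = min(1.0, clip / max(np.linalg.norm(delta), 1e-12))  # L2 clipping
        deltas.append(delta * scale)
    avg_delta = np.mean(deltas, axis=0)
    sigma = noise_multiplier * clip / len(client_datasets)  # central DP noise scale
    return w + avg_delta + np.random.normal(0.0, sigma, size=avg_delta.shape)

clients = [np.random.normal(loc=1.0, size=(16, 4)) for _ in range(10)]  # toy data
w = np.zeros(4)
for _ in range(25):  # simulated federated rounds
    w = dp_fedavg_round(w, clients)
print(w)  # close to the population mean (1.0), up to DP noise
```

Clipping bounds each client’s influence on the aggregate, and the Gaussian noise calibrated to that bound is what yields the differential privacy guarantee.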
Acknowledgements
Devon Hjelm, Ozan Sener, Pau Rodriguez Lopez, and Tatiana Likhomanenko are Area Chairs for ICML 2024.
Aadirupa Saha is a Co-Organizer for the Models of Human Feedback for AI Alignment workshop.
Rin Metcalf Susa is a Panelist at the Models of Human Feedback for AI Alignment workshop.
Marco Cuturi, Samy Bengio, and Vladlen Koltun are Senior Meta Reviewers for ICML 2024.
Arno Blaas, Bailin Wang, Fartash Faghri, Gustaf Ahdritz, Hilal Asi, Junpei Zhou, Luca Zappella, Miguel Angel Bautista Martin, Miguel Sarabia del Castillo, Parikshit Gopalan, Qichen Fu, Ray Zhang, Richard Bai, Rin Metcalf Susa, Vimal Thilak, Xavier Suau Cuadros, Yu-Guan Hsieh, Yuyang Wang, and Zoe Piran are reviewers for ICML 2024.
Natalie Schluter is a Senior Workshop Chair for ICML 2024.