Rodolphe Jenatton

CTO at Bioptimus.

Contact: rj X bioptimus Y com (with X=@ and Y=.)

[Google Scholar] [dblp] [LinkedIn]

Short bio:

In 2011, I completed my Ph.D. within the Sierra team of the Département d'Informatique at École Normale Supérieure, where I had the chance to be co-supervised by Francis Bach and Jean-Yves Audibert. I then spent a great year as a postdoctoral researcher with Alexandre d'Aspremont at École Polytechnique. From January 2013 until May 2014, I worked at Criteo, where I was in charge of improving the statistical and optimization aspects of the ad prediction engine. Until April 2019, I worked as a senior machine learning scientist at Amazon in Berlin, focusing on online learning, Bayesian optimization and AutoML. I was then a senior research scientist in the Google DeepMind team in Berlin until December 2023. I am now CTO at Bioptimus.

Research interests:

My research interests revolve around machine learning, statistics, optimization, sparsity, AutoML and uncertainty modelling in neural networks. More generally, I am excited by how foundation models can help us understand the inner workings of biology.

Recent reviewing activity:

Area chair for ICML 2021, 2022; NeurIPS 2020, 2023; ICLR 2021, 2022, 2023, 2024.

Publications:

Journal:

  • (2022) L. Carratino, M. Cissé, R. Jenatton, J.-P. Vert. On Mixup Regularization. Journal of Machine Learning Research, 23:1-31, 2022. [pdf] [code]

  • (2022) J. Urquhart Allingham, F. Wenzel, Z. Mariet, B. Mustafa, J. Puigcerver, N. Houlsby, G. Jerfel, V. Fortuin, B. Lakshminarayanan, J. Snoek, D. Tran, C. Riquelme, R. Jenatton. Sparse MoEs meet Efficient Ensembles. Transactions on Machine Learning Research. [pdf] [code]

  • (2022) V. Fortuin, M. Collier, F. Wenzel, J. Allingham, J. Liu, D. Tran, B. Lakshminarayanan, J. Berent, R. Jenatton, E. Kokiopoulou. Deep Classifiers with Label Noise Modeling and Distance Awareness. Transactions on Machine Learning Research. [pdf] [code]

  • (2015) F. Fogel, R. Jenatton, F. Bach, A. d'Aspremont. Convex Relaxations for Permutation Problems. SIAM Journal on Matrix Analysis and Applications, 36(4):1465-1488, 2015. [pdf]

  • (2015) R. Gribonval, R. Jenatton, F. Bach. Sparse and spurious: dictionary learning with noise and outliers. IEEE Transactions on Information Theory, 61(11):6298-6319. [ieee] [pdf on arXiv]

  • (2015) R. Gribonval, R. Jenatton, F. Bach, M. Kleinsteuber and M. Seibert. Sample complexity of dictionary learning and other matrix factorizations. IEEE Transactions on Information Theory, 61(6):3469-3486. [ieee] [pdf on arXiv]

  • (2012) R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, E. Eger, F. Bach and B. Thirion. Multi-scale Mining of fMRI Data with Hierarchical Structured Sparsity. SIAM Journal on Imaging Sciences, 5(3):835-856, 2012. [pdf]

  • (2011) R. Jenatton*, J. Mairal*, G. Obozinski, F. Bach (*Contributed equally). Proximal Methods for Hierarchical Sparse Coding. Journal of Machine Learning Research, 12(Jul):2297-2334. [pdf]

  • (2011) J. Mairal*, R. Jenatton*, G. Obozinski, F. Bach (*Contributed equally). Convex and Network Flow Optimization for Structured Sparsity. Journal of Machine Learning Research, 12(Sep):2681-2720. [pdf]

  • (2011) R. Jenatton, J.-Y. Audibert and F. Bach. Structured Variable Selection with Sparsity-Inducing Norms. Journal of Machine Learning Research, 12(Oct):2777-2824. [pdf] [code]

Conference:

  • (2023) J. Kossen, M. Collier, B. Mustafa, X. Wang, X. Zhai, L. Beyer, A. Steiner, J. Berent, R. Jenatton, E. Kokiopoulou. Three Towers: Flexible Contrastive Learning with Pretrained Image Models. Advances in Neural Information Processing Systems (NeurIPS). [pdf]

  • (2023) G. Ortiz-Jimenez, M. Collier, A. Nawalgaria, A. D'Amour, J. Berent, R. Jenatton, E. Kokiopoulou. When does Privileged Information Explain Away Label Noise? International Conference on Machine Learning (ICML). [pdf]

  • (2023) M. Collier*, R. Jenatton*, B. Mustafa, N. Houlsby, J. Berent, E. Kokiopoulou (*Contributed equally). Massively Scaling Heteroscedastic Classifiers. International Conference on Learning Representations (ICLR). [pdf]

  • (2022) J. Puigcerver, R. Jenatton, C. Riquelme, P. Awasthi, S. Bhojanapalli. On the Adversarial Robustness of Mixture of Experts. Advances in Neural Information Processing Systems (NeurIPS). [pdf]

  • (2022) B. Mustafa, C. Riquelme, J. Puigcerver, R. Jenatton, N. Houlsby. Multimodal Contrastive Learning with LIMoE: the Language-Image Mixture of Experts. Advances in Neural Information Processing Systems (NeurIPS). [pdf] [blogpost]

  • (2022) M. Collier, B. Mustafa, E. Kokiopoulou, R. Jenatton, J. Berent. Transfer and Marginalize: Explaining Away Label Noise with Privileged Information. International Conference on Machine Learning (ICML). [pdf]

  • (2022) S. Ariafar, J. Gilmer, Z. Nado, J. Snoek, R. Jenatton, G. E. Dahl. Predicting the utility of search spaces for black-box optimization: a simple, budget-aware approach. International Conference on Artificial Intelligence and Statistics (AISTATS). [pdf]

  • (2021) C. Riquelme, J. Puigcerver, B. Mustafa, M. Neumann, R. Jenatton, A. Susano Pinto, D. Keysers, N. Houlsby. Scaling Vision with Sparse Mixture of Experts. Advances in Neural Information Processing Systems (NeurIPS). [pdf] [blogpost] [code]

  • (2021) M. Collier, B. Mustafa, E. Kokiopoulou, R. Jenatton, J. Berent. Correlated Input-Dependent Label Noise in Large-Scale Image Classification. Conference on Computer Vision and Pattern Recognition (CVPR). [pdf]

  • (2021) M. Havasi, R. Jenatton, S. Fort, J.Z. Liu, J. Snoek, B. Lakshminarayanan, A.M. Dai, D. Tran. Training independent subnetworks for robust prediction. International Conference on Learning Representations (ICLR). [pdf]

  • (2021) V. Perrone, H. Shen, A. Zolic, I. Shcherbatyi, A. Ahmed, T. Bansal, M. Donini, F. Winkelmolen, R. Jenatton, J.B. Faddoul, B. Pogorzelska, M. Miladinovic, K. Kenthapadi, M. Seeger, C. Archambeau. Amazon SageMaker Automatic Model Tuning: Scalable Black-box Optimization. SIGKDD Conference on Knowledge Discovery and Data Mining. [pdf]

  • (2020) F. Wenzel, J. Snoek, D. Tran, R. Jenatton. Hyperparameter Ensembles for Robustness and Uncertainty Quantification. Advances in Neural Information Processing Systems (NeurIPS). [pdf] [code]

  • (2020) F. Wenzel, K. Roth, B. S. Veeling, J. Świątkowski, L. Tran, J. Snoek, S. Mandt, T. Salimans, R. Jenatton, S. Nowozin. How Good is the Bayes Posterior in Deep Neural Networks Really? International Conference on Machine Learning (ICML). [pdf] [code]

  • (2020) J. Świątkowski, K. Roth, B. S. Veeling, L. Tran, J. V. Dillon, J. Snoek, S. Mandt, T. Salimans, R. Jenatton, S. Nowozin. The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks. International Conference on Machine Learning (ICML). [pdf]

  • (2019) V. Perrone, H. Shen, M. Seeger, C. Archambeau, R. Jenatton. Learning search spaces for Bayesian optimization: Another view of hyperparameter transfer learning. Advances in Neural Information Processing Systems (NeurIPS). [arxiv]

  • (2018) V. Perrone, R. Jenatton, M. Seeger, C. Archambeau. Scalable Hyperparameter Transfer Learning. Advances in Neural Information Processing Systems (NeurIPS). [pdf] [supp]

  • (2017) R. Jenatton, C. Archambeau, J. Gonzalez, M. Seeger. Bayesian Optimization with Tree-structured Dependencies. International Conference on Machine Learning (ICML). [pdf] [supp]

  • (2016) J. Huang, R. Jenatton, C. Archambeau. Online dual decomposition for performance and delivery-based distributed ad allocation. SIGKDD Conference on Knowledge Discovery and Data Mining. [pdf]

  • (2016) R. Jenatton, J. Huang, C. Archambeau. Adaptive Algorithms for Online Convex Optimization with Long-term Constraints. International Conference on Machine Learning (ICML). [pdf]

  • (2015) A. Freno, M. Saveski, R. Jenatton, C. Archambeau. One-Pass Ranking Models for Low-Latency Product Recommendations. SIGKDD Conference on Knowledge Discovery and Data Mining. [pdf]

  • (2013) F. Fogel, R. Jenatton, F. Bach, A. d'Aspremont. Convex Relaxations for Permutation Problems. Advances in Neural Information Processing Systems (NIPS). [pdf]

  • (2012) R. Jenatton*, N. Le Roux*, A. Bordes, G. Obozinski (*Contributed equally). A latent factor model for highly multi-relational data. Advances in Neural Information Processing Systems (NIPS). [pdf] [code]

  • (2010) J. Mairal*, R. Jenatton*, G. Obozinski, F. Bach (*Contributed equally). Network Flow Algorithms for Structured Sparsity. Advances in Neural Information Processing Systems (NIPS). [pdf] [code]

  • (2010) R. Jenatton*, J. Mairal*, G. Obozinski, F. Bach (*Contributed equally). Proximal Methods for Sparse Hierarchical Dictionary Learning. International Conference on Machine Learning (ICML). [pdf] [code]

  • (2010) R. Jenatton, G. Obozinski, F. Bach. Structured Sparse Principal Component Analysis. International Conference on Artificial Intelligence and Statistics (AISTATS). [pdf] [code]

Book chapters:

  • (2012) F. Bach, R. Jenatton, J. Mairal and G. Obozinski. Structured sparsity through convex optimization. Statistical Science, 27(4):450-468, 2012. [Statistical Science] [pdf]

  • (2012) F. Bach, R. Jenatton, J. Mairal and G. Obozinski. Optimization with Sparsity-Inducing Penalties. Foundations and Trends in Machine Learning, 4(1):1-106, 2012. [FOT] [pdf]

  • (2011) F. Bach, R. Jenatton, J. Mairal and G. Obozinski. Convex Optimization with Sparsity-Inducing Norms. In S. Sra, S. Nowozin, S. J. Wright, editors, Optimization for Machine Learning, MIT Press, 2011. [pdf]

Technical reports:

  • (2023) K. Wang, G. Ortiz-Jimenez, R. Jenatton, M. Collier, E. Kokiopoulou, P. Frossard. Pi-DUAL: Using Privileged Information to Distinguish Clean from Noisy Labels. Technical report, arXiv:2310.06600. [pdf]

  • (2020) P. Das, V. Perrone, N. Ivkin, T. Bansal, Z. Karnin, H. Shen, I. Shcherbatyi, Y. Elor, W. Wu, A. Zolic, T. Lienart, A. Tang, A. Ahmed, J.B. Faddoul, R. Jenatton, F. Winkelmolen, P. Gautier, L. Dirac, A. Perunicic, M. Miladinovic, G. Zappella, C. Archambeau, M. Seeger, B. Dutt, L. Rouesnel. Amazon SageMaker Autopilot: a white box AutoML solution at scale. Technical report, arXiv:2012.08483. [pdf]

  • (2020) M. Collier, B. Mustafa, E. Kokiopoulou, R. Jenatton, J. Berent. A Simple Probabilistic Method for Deep Classification under Input-Dependent Label Noise. Technical report, arXiv:2003.06778. [pdf]

  • (2020) L. Tran, B. S. Veeling, K. Roth, J. Swiatkowski, J. V. Dillon, J. Snoek, S. Mandt, T. Salimans, S. Nowozin, R. Jenatton. Hydra: Preserving Ensemble Diversity for Model Distillation. Technical report, arXiv:2001.04694. [pdf]

  • (2019) V. Perrone, I. Shcherbatyi, R. Jenatton, C. Archambeau, M. Seeger. Constrained Bayesian Optimization with Max-Value Entropy Search. Technical report, arXiv:1910.07003. [pdf]

  • (2016) R. Jenatton, J. Huang, C. Archambeau. Online optimization and regret guarantees for non-additive long-term constraints. Technical report, arXiv:1602.05394. [pdf]

  • (2015) R. Jenatton, J. Huang, C. Archambeau. Adaptive Algorithms for Online Convex Optimization with Long-term Constraints. Technical report, arXiv:1512.07422. [pdf]

  • (2014) M. Seibert, M. Kleinsteuber, R. Gribonval, R. Jenatton and F. Bach. On The Sample Complexity of Sparse Dictionary Learning. Technical report, arXiv:1403.5112, 2014. [pdf]

  • (2012) R. Jenatton, R. Gribonval and F. Bach. Local stability and robustness of sparse dictionary learning in the presence of noise. Technical report, HAL 00737152. [pdf]

Selected workshops/talks:

  • (2023) J. Urquhart Allingham, F. Wenzel, Z. Mariet, B. Mustafa, J. Puigcerver, N. Houlsby, G. Jerfel, V. Fortuin, B. Lakshminarayanan, J. Snoek, D. Tran, C. Riquelme, R. Jenatton. Sparse MoEs meet Efficient Ensembles. Amazon StatML Workshop. [slides]

  • (2020) F. Wenzel, J. Snoek, D. Tran, R. Jenatton. Hyperparameter Ensembles for Robustness and Uncertainty Quantification. AutoML Seminars. [website] [slides]

  • (2019) J. Świątkowski, K. Roth, B. S. Veeling, L. Tran, J. V. Dillon, J. Snoek, S. Mandt, T. Salimans, R. Jenatton, S. Nowozin. The k-tied Normal Distribution: A Compact Parameterization of Gaussian Mean Field Posteriors in Bayesian Neural Networks. 2nd Symposium on Advances in Approximate Bayesian Inference, 2019 (best student award). [pdf]

  • (2018) L. Valkov, R. Jenatton, F. Winkelmolen, C. Archambeau. A simple transfer-learning extension of Hyperband. NeurIPS Workshop on Meta-Learning (MetaLearn 2018). [pdf] [supp]

  • (2017) V. Perrone, R. Jenatton, M. Seeger, C. Archambeau. Multiple Adaptive Bayesian Linear Regression for Scalable Bayesian Optimization with Warm Start. NIPS Workshop on Meta-Learning (MetaLearn 2017). [pdf]

  • (2015) Sparse and spurious: dictionary learning with noise and outliers. Optimization and Big Data 2015, Edinburgh. [slides]

  • (2011) R. Jenatton, R. Gribonval, and F. Bach. Local Analysis of Sparse Coding in the Presence of Noise. NIPS Workshop on Sparse Representation and Low-rank Approximation. [video]

  • (2011) J. Mairal, R. Jenatton, G. Obozinski and F. Bach. Learning Hierarchical and Topographic Dictionaries with Structured Sparsity. In Proceedings of the SPIE Conference on Wavelets and Sparsity XIV, 2011. [pdf]

  • (2011) R. Jenatton, A. Gramfort, V. Michel, G. Obozinski, F. Bach and B. Thirion. Multi-scale Mining of fMRI Data with Hierarchical Structured Sparsity. International Workshop on Pattern Recognition in Neuroimaging (PRNI). [ieee pdf]

  • (2010) G. Varoquaux, R. Jenatton, A. Gramfort, G. Obozinski, B. Thirion and F. Bach. Sparse Structured Dictionary Learning for Brain Resting-State Activity Modeling. NIPS Workshop on Practical Applications of Sparse Modeling: Open Issues and New Directions.

  • (2009) R. Jenatton, J.-Y. Audibert and F. Bach. Active Set Algorithm for Structured Sparsity-Inducing Norms. OPT 2009: 2nd NIPS Workshop on Optimization for Machine Learning. [pdf] [slide] [video]

Thesis:

  • (2011) Structured Sparsity-Inducing Norms: Statistical and Algorithmic Properties with Applications to Neuroimaging. Ph.D. thesis, École Normale Supérieure de Cachan, 2011. [pdf] [slides of the defense]

Awards:

  • Winner of the best 2012 Applied Mathematics PhD thesis prize, Fondation Hadamard 2012. [more details]

  • Runner-up for the best 2012 Machine Learning PhD thesis prize, Association Française pour l'Intelligence Artificielle 2012. [more details]