This week, I attended the 37th International Conference on Machine Learning (ICML 2020). It would have taken place in Vienna (Austria), but because of COVID-19, the organizers decided to hold the conference online instead. I applaud the organizers of ICML 2020: the conference website was well made, the digital experience was pretty good, and all speakers recorded excellent videos of their talks.
What attending ICML 2020 online was like
Attending a conference such as ICML online is very different from attending a conference in person.
Some of the advantages
For instance, attendance fees are considerably lower for online conferences, and it is much easier to accommodate more visitors and speakers. Busier people can also squeeze in a day of attendance more easily, and researchers can use their travel budgets more effectively, which particularly benefits less well-funded institutes.
Furthermore, because all talks are prerecorded, you can watch any talk that interests you whenever you want. If you attend a large conference in person, you will notice that it is usually organized around parallel sessions. Parallel sessions usually let you attend most of the talks that interest you, but not all – on the upside, they make it more likely that you see talks slightly outside your area of expertise or interest, and this can be inspiring.
Another benefit of prerecorded talks is that you can pause the video, download the authors’ paper on the spot, scrub through the slides, and reflect more deeply on the results. This dramatically helps information retention.
Lastly, it is great to be able to cast a keynote speaker to your television and enjoy their talk from your sofa.
Some of the disadvantages
Online conferences do not achieve the same level of international community building. Everyone is watching videos from home, alone, after all. You do not meet your international colleagues; you are not as likely to make new contacts; there are no inspiring chats during coffee breaks, lunches, or dinners; the social event (if any!) is less of a communal experience; and you are less likely to ask or be asked a question.
Also, we should acknowledge that occasional traveling is an enjoyable part of an academic career, and online conferences eliminate this attractive part of our work.
Finally, because you also work from home, it is much harder to focus solely on the conference: you have to actively avoid working on other tasks and sit down with the mindset that you are attending a conference. To help with this, a PhD student and I decided to have daily “conference coffees” to recommend talks to each other and to briefly discuss the ones we liked best.
Should all future conferences now be online?
The COVID-19 pandemic clearly forced conference organizers to either cancel their conferences or move them online this year. This problem was not restricted to ICML 2020, and many conferences were affected.
Longer term, however, I hope that online-only conferences do not become the norm. I would prefer a best-of-both-worlds approach: in-person conference attendance combined with a strong digital presence, for example live streaming and recording of talks. This would retain most of the advantages mentioned above while mitigating most of the disadvantages.
Talks that I attended at ICML 2020
I most likely forgot to write some of the titles down, but here is a list of talks that I attended at ICML 2020:
- Keynote talks
- Lester Mackey, Doing Some Good with Machine Learning
- Brenna Argall, Human and Machine Learning for Assistive Autonomy
- Iordanis Kerenidis, Quantum Machine Learning: Prospects and Challenges
- Bandit optimization
- Andrey Kolobov, Sébastien Bubeck, Julian Zimmert, Online Learning for Active Cache Synchronization
- Rémy Degenne, Han Shao, Wouter Koolen, Structure Adaptive Algorithms for Stochastic Bandits
- Vidyashankar Sivakumar, Steven Wu, Arindam Banerjee, Structured Linear Contextual Bandits: A Sharp and Geometric Smoothed Analysis
- Dylan Foster, Alexander Rakhlin, Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles
- Clustering
- Wang, Zhou, So, A Nearly-Linear Time Algorithm for Exact Community Recovery in the Stochastic Block Model
- Brian Brubach, Darshan Chakrabarti, John Dickerson, Samir Khuller, Aravind Srinivasan, Leonidas Tsepenekas, A Pairwise Fair and Community-preserving Approach to k-Center Clustering
- Xinjie Fan, Yuguang Yue, Purnamrita Sarkar, Y.X. Rachel Wang, On hyperparameter tuning in general clustering problems
- Michal Moshkovitz, Sanjoy Dasgupta, Cyrus Rashtchian, Nave Frost, Explainable k-Means and k-Medians Clustering
- Filippo Maria Bianchi, Daniele Grattarola, Cesare Alippi, Spectral Clustering with Graph Neural Networks for Graph Pooling
- Dropout and pruning
- Baifeng Shi, Dinghuai Zhang, Qi Dai, Jingdong Wang, Zhanxing Zhu, Yadong Mu, Informative Dropout for Robust Representation Learning: A Shape-bias Perspective
- Colin Wei, Sham Kakade, Tengyu Ma, The Implicit and Explicit Regularization Effects of Dropout
- Alexander Shevchenko, Marco Mondelli, Landscape Connectivity and Dropout Stability of SGD Solutions for Over-parameterized Neural Networks
- Eran Malach, Gilad Yehudai, Shai Shalev-Shwartz, Ohad Shamir, Proving the Lottery Ticket Hypothesis: Pruning is All You Need
- Sampling methods
- Robert Salomone, Matias Quiroz, Robert Kohn, Mattias Villani, Minh-Ngoc Tran, Spectral Subsampling MCMC for Stationary Time Series
- James Wilson, Slava Borovitskiy, Alexander Terenin, Peter Mostowsky, Marc Deisenroth, Efficiently sampling functions from Gaussian process posteriors
- Optimization theory
- Yonatan Dukler, Quanquan Gu, Guido Montúfar, Optimization Theory for ReLU Neural Networks Trained with Normalization Layers
- Jingzhao Zhang, Hongzhou Lin, Stefanie Jegelka, Suvrit Sra, Ali Jadbabaie, Complexity of Finding Stationary Points of Nonconvex Nonsmooth Functions
- Random matrices
- Mohamed El Amine Seddik, Cosme Louart, Mohamed Tamaazousti, Romain Couillet, Random Matrix Theory Proves that Deep Learning Representations of GAN-data Behave as Gaussian Mixtures
- Stochastic optimization
- Hadrien Hendrikx, Lin Xiao, Sébastien Bubeck, Francis Bach, Laurent Massoulie, Statistically Preconditioned Accelerated Gradient Method for Distributed Optimization
- Vien Van Mai, Mikael Johansson, Convergence of a Stochastic Gradient Method with Momentum for Non-Smooth Non-Convex Optimization
- Mido Assran, Mike Rabbat, On the Convergence of Nesterov’s Accelerated Gradient Method in Stochastic Settings
- Armin Eftekhari, Training Linear Neural Networks: Non-Local Convergence and Complexity Results
- Badr-Eddine Chérief-Abdellatif, Convergence Rates of Variational Inference in Sparse Deep Learning
- Yoel Drori, Ohad Shamir, The Complexity of Finding Stationary Points with Stochastic Gradient Descent
- Pan Xu, Quanquan Gu, A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation
Visit the Conferences Category for similar posts on conferences.