The ICML-21 Workshop on Information-Theoretic Methods for Rigorous, Responsible, and Reliable Machine Learning (ITR3 @ ICML-21) brings together researchers and practitioners in the machine learning and information theory communities to foster the development of theory that provides rigorous performance guarantees for ML models, guides the responsible deployment of ML in applications of social consequence, and enables the design of reliable ML systems in large-scale, distributed environments.
Recent advances in privacy, fairness, and generalization suggest that information theory will play a pivotal role in the next decade of ML applications. Information-theoretic methods can sharpen generalization bounds for deep learning, provide rigorous guarantees for the compression of neural networks, promote fairness and privacy in ML training and deployment, improve communication efficiency in distributed training, and shed light on the limits of learning from noisy data. This workshop will highlight these new, socially critical research areas and allow ICML attendees to learn more about the information-theoretic techniques behind these recent advances.
The workshop will feature a series of invited talks, interactive Q&A sessions, and panels, each with participants of diverse backgrounds. We also welcome submissions of novel work to be presented during virtual poster sessions; please see the call for papers for details. Follow us on Twitter for updates (https://twitter.com/ITR3_workshop), and reach out to us via email with any questions or suggestions: [email protected]. Looking forward to meeting you in July!