Join us at our ICLR workshop on Friday, May 7, 2021 (0800-1500 hrs Eastern Daylight Time)

Time (EDT / PDT / CEST)
Event

0815 / 0515 / 1415
Organizers
Opening remarks

0830 / 0530 / 1430
Jeff Bigham - Carnegie Mellon University
Accessibility and the ML paper

TBD

0900 / 0600 / 1500
David Ha - Google
Case studies for interactive demonstration of machine learning models on the web browser

While papers are the main means for scientists to communicate results, both quantitative and qualitative, to the scientific community, expectations in the machine learning community have moved above and beyond the paper format. Machine learning models are ultimately expected to be used by people, on devices, computers, and in other applications. In recent years we have witnessed the popularity of works that are also published as web articles with interactive demos, enabling readers to interact with research models and experience both the features and the limitations of cutting-edge methods. This comes at a cost: developing and deploying such interactive websites consumes time and energy from the researcher's point of view. In particular, the audience may find flaws in the model by interacting with it in ways unintended by authors who simply wish to report a score against a benchmark. In this talk, I will discuss my own experiences developing interactive web browser demos for my research, along with other interactive demos in the literature, as a series of case studies. By the end of the talk, the audience will be familiar with the different approaches used in developing web demos for research and their tradeoffs, and be able to assess whether it is something they wish to do for their own projects.

0930 / 0630 / 1530
Authors
Spotlight talks
Curating Publications as Artefacts
I❤LA -- Compilable Markdown for Linear Algebra

0950 / 0650 / 1550
Virtual Coffee Break

1000 / 0700 / 1600
Hugo Larochelle - Google
An honest conversation on the ML conference paper

The conference paper is the dominant medium for scientific dissemination in the ML community. Yet, the ML community's favorite hobby is also quite possibly ... to complain about the ML conference paper! In this talk, I will attempt to tease out some potential reasons behind this discontent. I will also lay out my own views on how we might do better than the conference paper.

1030 / 0730 / 1630
Falaah Arif Khan - New York University
It’s funny because it’s true - confronting ML catechisms

Meet AI - it’s a very strange time in its life. There’s tremendous potential to do good, immense interest in building and widely deploying these systems, and a very real impact (including irrevocable harm) associated with this technology. The landscape is rife with problems – incentive structures and a gold-rush mentality in scholarship, celebrity culture and media hype, unhealthy extremes of techno-bashing and techno-optimism, and the false dichotomy between “social problems” and “engineering problems”. Nuance and critical thinking are the most valuable, yet scarce, commodities! A possible first step at self-correction could be for us – as practitioners and designers of these systems – to stop taking *ourselves* so seriously and instead direct this gravitas onto the *consequences* of our work (beyond citation counts and academic accolades). How, you ask? Using the marvelous world of comics! In this talk, I’ll present my thoughts (in comic form) on some of the pressing problems in the ML landscape and (attempt to) motivate artistic interventions as a possible solution to catechisms of the scientific method that we take too seriously and perhaps need to rethink.

1100 / 0800 / 1700
Evelyne Viegas - Microsoft Research
From competitions to coopetitions to drive open innovation

When data became a first-class citizen in driving innovation, several competition platforms emerged, defining data-driven challenges as a way to bring solutions to global or business problems. In contrast, CodaLab was designed to create a collaborative ecosystem for conducting computational machine learning research in an efficient, open, and reproducible manner. The goals of CodaLab were threefold: 1) reduce duplication of effort; 2) enable reproducibility of experiments with comparable baselines; 3) encourage the community to work collaboratively to solve grand challenges. The community was further developed as part of the Challenges in Machine Learning (CiML) workshop series, which brings the broader community of challenge organisers and participants together yearly at NeurIPS. CiML provides an opportunity to share best practices across platforms and define higher-impact challenges for research, education, or innovation, while fostering diversity in the community of participants and organisers to address global and local challenges.

1130 / 0830 / 1730
Authors
Spotlight talks
In defense of the paper
ML research communication via illustrated and interactive web articles
You Only Write Thrice

1150 / 0850 / 1750
Virtual Coffee Break

1200 / 0900 / 1800
Lilian Weng - OpenAI
Catch up with the field by writing a high-quality machine learning blog

In this talk, I will use my personal journey into the field of deep learning as a case study for how to write a high-quality machine learning blog. I will talk about the general process of writing about a new topic and the common difficulties I run into. Then I will compare the style of routine academic publications with more casual blogging and point out the pros and cons of each.

1230 / 0930 / 1830
Terence Parr - University of San Francisco
Ya gotta make it obvious

Half the human cortex is devoted to understanding our world visually and so it makes sense to leverage that processing power in order to understand, describe, and debug computational abstractions, such as machine learning models. The problem is that explaining our work visually often represents considerable extra effort, particularly if we want to employ animations. It's also the case that we all have the urge to impress rather than illuminate. Taken together, this can lead to papers, lectures, and classes that don't actually transmit ideas to others. We should value simple and clear expositions most of all, making the key ideas obvious, even if it requires extraordinary effort. We should not accept the status quo, and constantly ask ourselves if these are the best explanations and visualizations we can make. This short talk will demonstrate some state-of-the-art visualizations from explained.ai and describe their backstories.

1300 / 1000 / 1900
Devi Parikh, Evelyne Viegas, Falaah Arif Khan, Hugo Larochelle
Panel Discussion

1340 / 1040 / 1940
Organizers
Closing remarks

1355 / 1055 / 1955
Authors
Poster Session

Gather.town link: https://eventhosts.gather.town/app/wVzIccpD3mMEJCOR/rethinkingmlpapers (requires an ICLR registration)