This post originally appeared on MIT Technology Review
Only two years ago, so I’m told, one of the hottest AI research conferences of the year was more giant party than academic exchange. In a fight for the best talent, companies handed out endless free swag and threw massive, blowout events, including one featuring Flo Rida, hosted by Intel. The attendees (mostly men in their early 20s and 30s), flush with huge salaries and the giddiness of being highly coveted, drank free booze and bumped the night away.
I never witnessed this version of NeurIPS, short for the Neural Information Processing Systems conference. I attended for the first time last year, after the excess had reached its peak. Externally, the community was coming under increasing scrutiny as the upset of the 2016 US presidential election drove people to question the influence of algorithms in society. Internally, reports of sexual harassment, anti-Semitism, racism, and ageism were also driving conference-goers to question whether they should continue to attend.
So by the time I arrived in 2018, a diversity and inclusion committee had been appointed, and the long-standing abbreviation NIPS had been updated. Still, this year’s proceedings feel different from last year’s. The parties are smaller, the talks are more socially minded, and the conversations happening in between seem more aware of the ethical challenges that the field needs to address.
As the role of AI has expanded dramatically, along with the more troubling aspects of its impact, the community, it seems, has finally begun to reflect on its power and the responsibilities that come with it. As one attendee put it to me: “It feels like this community is growing up.”
This change manifested in some concrete ways. Many of the technical sessions were more focused on addressing real-world, human-centric challenges rather than theoretical ones. Entire poster tracks were centered on better methods for protecting user privacy, ensuring fairness, and reducing the energy required to train and run state-of-the-art models. Day-long workshops, scheduled for today and tomorrow, have titles like “Tackling Climate Change with Machine Learning” and “Fairness in Machine Learning for Health.”
Additionally, many of the invited speakers directly addressed the social and ethical challenges facing the field—topics once dismissed as not core to the practice of machine learning. Their talks were well received by attendees, too, signaling a new openness to engage with these issues. At the opening event, for example, cognitive psychologist and #MeToo figurehead Celeste Kidd gave a rousing speech exhorting the tech industry to take responsibility for how its technologies shape people’s beliefs, and debunking myths around sexual harassment. She received a standing ovation. In an opening talk at the Queer in AI symposium, Stanford researcher Ria Kalluri also challenged others to think more about how their machine-learning models could shift power in society from those who have it to those who don’t. Her talk was widely circulated online.
Much of this isn’t coincidental. Through the work of the diversity and inclusion committee, the conference saw the most diverse participation in its history. Close to half the main-stage speakers were women, and a similar proportion were minorities; 20% of the over 13,000 attendees were also women, up from 18% last year. There were seven community-organized groups for supporting minority researchers—a record. These included Black in AI, Queer in AI, and Disability in AI, and they held parallel proceedings in the same space as NeurIPS to facilitate the mingling of people and ideas.
When we involve more people from diverse backgrounds in AI, Kidd told me, we naturally talk more about how AI is shaping society, for good or for bad. “They come from a less privileged place and are more acutely aware of things like bias and injustice and how technologies that were designed for a certain demographic may actually do harm to disadvantaged populations,” she said. Kalluri echoed the sentiment. The intentional efforts to diversify the community, she said, are forcing it to “confront the questions of how power works in this field.”
Despite the progress, however, many emphasized that the work is just getting started. A field that is only 20% women is still appalling, and this year, as in past years, researchers faced Herculean challenges in securing visas, particularly those traveling from Africa.
“Historically, this field has been pretty narrowed in on a particular demographic of the population, and the research that comes out reflects the values of those people,” says Katherine Heller, an assistant professor at Duke University and co-chair of the diversity committee. “What we want in the long run is a more inclusive place to shape what the future direction of AI is like. There’s still a far way to go.”
Yes, there’s still a long way to go. But on Monday, as people lined up to thank Kidd for her talk one by one, I let myself feel hopeful.