This post originally appeared on MIT Technology Review
A year ago, none the wiser about what 2020 would bring, I reflected on the pivotal moment the AI community was in. 2018 had seen a series of high-profile automated failures, like self-driving car crashes and discriminatory recruiting tools. In 2019, the field responded with more talk of AI ethics than ever before. But talk, I said, was not enough. We needed to take tangible action. Two months later, the coronavirus shut down the world.
In our new socially distanced, remote-everything reality, these conversations about algorithmic harms suddenly came to a head. Systems that had been at the fringe, like HireVue’s face-scanning algorithms and workplace surveillance tools, were going mainstream. Others, like tools to monitor and evaluate students remotely, were spinning up in real time. In August, after the UK government’s spectacular failure to replace in-person exams with an algorithm for determining university admissions, hundreds of students gathered in London to chant, “Fuck the algorithm.” “This is becoming the battle cry of 2020,” tweeted AI accountability researcher Deb Raji when a Stanford protester shouted it again, over a different debacle, a few months later.
At the same time, there was indeed more action. In one major victory, Amazon, Microsoft, and IBM banned or suspended their sale of face recognition to law enforcement after the killing of George Floyd spurred global protests against police brutality. It was the culmination of two years of fighting by researchers and civil rights activists to demonstrate that the companies’ technologies were both ineffective and discriminatory. Another small yet notable change: for the first time ever, NeurIPS, one of the most prominent AI research conferences, required researchers to submit an ethics statement with their papers.
So here we are at the start of 2021, with more public awareness of and regulatory attention on AI’s influence than ever before. My New Year’s resolution: Let’s make it count. Here are five hopes that I have for AI in the coming year.
Reduce corporate influence in research
The tech giants have disproportionate control over the direction of AI research, and their preferences have pushed the field as a whole toward ever bigger data and bigger models. Investing so single-mindedly in this approach has several consequences. It inflates the climate impact of AI advancements, locks resource-constrained labs out of the field, and encourages lazier scientific inquiry by sidelining the range of other approaches. And as Google’s ousting of Timnit Gebru revealed, tech giants will readily limit the field’s ability to investigate those consequences as well.
But much of this corporate influence comes down to money and the lack of alternative funding. As I wrote last year in my profile of OpenAI, the lab initially sought to rely only on independent wealthy donors. The bet proved unsustainable, and four years later it signed an investment deal with Microsoft. My hope is that more governments will step into this void to provide non-defense-related funding options for researchers. It won’t be a perfect solution, but it will be a start. Governments are beholden to the public, not the bottom line.
Refocus on common-sense understanding
The overwhelming attention on bigger and badder models has overshadowed one of the central goals of AI research: to create intelligent machines that don’t just pattern-match but actually understand meaning. While corporate influence is a major contributor to this trend, there are other culprits as well. Research conferences and peer-reviewed publications place a heavy emphasis on achieving “state-of-the-art” results. But the state of the art is often poorly measured, by benchmark tests that can be beaten with more data and larger models.
It’s not that large-scale models could never reach common-sense understanding; that’s still an open question. But there are other avenues of research deserving of greater investment. Some experts have placed their bets on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.
In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems; the improvements would also have major social implications. The susceptibility of current deep-learning systems to being fooled, for example, undermines the safety of self-driving cars and raises dangerous possibilities for autonomous weapons. And systems’ inability to distinguish correlation from causation lies at the root of algorithmic discrimination.
Empower marginalized researchers
If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be at the table when they are developed. I saw no better evidence of this than at NeurIPS in December 2019. That year, the conference had a record number of women and minority speakers and attendees, and I could tangibly feel it shift the tenor of the proceedings. There were more talks than ever grappling with AI’s influence on society.
At the time, I lauded the community for its progress. But Google’s treatment of Gebru, one of the few prominent Black women in the industry, showed how far there still is to go. Diversity in numbers is meaningless if those individuals aren’t empowered to bring their lived experience into their work. I’m optimistic, though, that the tide is turning. The flashpoint sparked by Gebru’s firing has turned into a critical moment of reflection for the industry. I hope this momentum continues and converts into long-lasting, systemic change.
Center the perspectives of impacted communities
There’s also another group to bring to the table. One of the most exciting trends from last year was the emergence of participatory machine learning. It’s a provocation to reinvent the process of AI development to include those who ultimately become subject to the algorithms.
In July, the first conference workshop dedicated to this approach collected a wide range of ideas about what that could look like. It included new governance procedures for soliciting community feedback; new model auditing methods for informing and engaging the public; and proposed redesigns of AI systems to give users more control of their settings.
My hope for 2021 is to see more of these ideas trialed and adopted in earnest. Facebook is already testing out a version of this with its external oversight board. If the company follows through with allowing the board to make binding changes to the platform’s content moderation policies, the governance structure could become a feedback mechanism worthy of emulation.
Codify guardrails into regulation
Thus far, grassroots efforts have led the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to set up more permanent guardrails. The good news is that lawmakers around the world have been watching and are in the midst of drafting legislation. In the US, members of Congress have already introduced bills to address facial recognition, AI bias, and deepfakes. Several of them also sent a letter to Google in December expressing their intent to continue pursuing this regulation.
So my last hope for 2021 is that we see some of these bills pass. It’s time to codify what we’ve learned over the past few years and move away from the fiction of self-regulation.