This post originally appeared on MIT Technology Review
Regulators, rein us in: Tesla and SpaceX CEO Elon Musk has said that advanced artificial intelligence development should be regulated, including AI created by his own companies. He tweeted the remark in response to an article published this week by MIT Technology Review about OpenAI, which Musk co-founded but has since left; the article described how the lab has drifted from its initial purpose of developing AI safely and fairly to become secretive and preoccupied with raising money. When Musk was asked whether he meant AI should be regulated by individual governments or on a global scale, for example by the UN, he replied: “Both.”
Timely: The European Union unveiled a plan today to regulate “high risk” AI systems. New draft laws are expected to follow at the end of 2020. Last year, 42 countries signed up to a pledge to take steps to regulate AI. However, the US and China currently seem to be prioritizing innovation and supremacy in the field of AI over regulation and safety concerns.
Longstanding worries: This is far from the first time Musk has expressed concern about the potential negative consequences of AI development. He has previously described it as “our biggest existential threat” and “potentially more dangerous than nukes.” In 2018 he told Recode that a government committee should spend a year or two “gaining insight about AI” and then come up with regulations to ensure AI is developed and used safely.
To have more stories like this delivered directly to your inbox, sign up for our Webby-nominated AI newsletter The Algorithm. It’s free.