
Military artificial intelligence can be easily and dangerously fooled

by Guest Author
October 21, 2019

This post originally appeared on MIT Technology Review

Last March, Chinese researchers announced an ingenious and potentially devastating attack against one of America’s most prized technological assets—a Tesla electric car.

The team, from the security lab of the Chinese tech giant Tencent, demonstrated several ways to fool the AI algorithms on Tesla’s car. By subtly altering the data fed to the car’s sensors, the researchers were able to bamboozle and bewilder the artificial intelligence that runs the vehicle.


In one case, a TV screen contained a hidden pattern that tricked the windshield wipers into activating. In another, lane markings on the road were ever-so-slightly modified to confuse the autonomous driving system so that it drove over them and into the lane for oncoming traffic.

Tesla’s algorithms are normally brilliant at spotting drops of rain on a windshield or following the lines on the road, but they work in a way that’s fundamentally different from human perception. That makes such “deep learning” algorithms, which are rapidly sweeping through different industries for applications such as facial recognition and cancer diagnosis, surprisingly easy to fool if you find their weak points.

Leading a Tesla astray might not seem like a strategic threat to the United States. But what if similar techniques were used to fool attack drones, or software that analyzes satellite images, into seeing things that aren’t there—or not seeing things that are?

Artificial intelligence-gathering

Around the world, AI is already seen as the next big military advantage.

Early this year, the US announced a grand strategy for harnessing artificial intelligence in many areas of the military, including intelligence analysis, decision-making, vehicle autonomy, logistics, and weaponry. The Department of Defense’s proposed $718 billion budget for 2020 allocates $927 million for AI and machine learning. Existing projects include the rather mundane (testing whether AI can predict when tanks and trucks need maintenance) as well as things on the leading edge of weapons technology (swarms of drones).

The Pentagon’s AI push is partly driven by fear of the way rivals might use the technology. Last year Jim Mattis, then the secretary of defense, sent a memo to President Donald Trump warning that the US is already falling behind when it comes to AI. His worry is understandable.


In July 2017, China articulated its AI strategy, declaring that “the world’s major developed countries are taking the development of AI as a major strategy to enhance national competitiveness and protect national security.” And a few months later, Vladimir Putin of Russia ominously declared: “Whoever becomes the leader in [the AI] sphere will become the ruler of the world.”

The ambition to build the smartest, and deadliest, weapons is understandable, but as the Tesla hack shows, an enemy that knows how an AI algorithm works could render it useless or even turn it against its owners. The secret to winning the AI wars might rest not in making the most impressive weapons but in mastering the disquieting treachery of the software.

Battle bots

On a bright and sunny day last summer in Washington, DC, Michael Kanaan was sitting in the Pentagon’s cafeteria, eating a sandwich and marveling over a powerful new set of machine-learning algorithms.

A few weeks earlier, Kanaan had watched a video game in which five AI algorithms worked together to very nearly outmaneuver, outgun, and outwit five humans in a contest that involved controlling forces, encampments, and resources across a complex, sprawling battlefield. The brow beneath Kanaan’s cropped blond hair furrowed as he described the action. It was one of the most impressive demonstrations of AI strategy he’d ever seen, an unexpected development akin to AI advances in chess, Atari, and other games.

The war game had taken place within Dota 2, a popular sci-fi video game that is incredibly challenging for computers. Teams must defend their territory while attacking their opponents’ encampments in an environment that is more complex and deceptive than any board game. Players can see only a small part of the whole picture, and it can take about half an hour to determine if a strategy is a winning one.

The AI combatants were developed not by the military but by OpenAI, a company created by Silicon Valley bigwigs including Elon Musk and Sam Altman to do fundamental AI research. The company’s algorithmic warriors, known as the OpenAI Five, worked out their own winning strategies through relentless practice, and by responding with moves that proved most advantageous.


It is exactly the type of software that intrigues Kanaan, one of the people tasked with using artificial intelligence to modernize the US military. To him, it shows what the military stands to gain by enlisting the help of the world’s best AI researchers. But whether they are willing is increasingly in question.

Kanaan was the Air Force lead on Project Maven, a military initiative aimed at using AI to automate the identification of objects in aerial imagery. Google was a contractor on Maven, and when other Google employees found that out, in 2018, the company decided to abandon the project. It subsequently devised an AI code of conduct saying Google would not use its AI to develop “weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.”

Workers at some other big tech companies followed by demanding that their employers eschew military contracts. Many prominent AI researchers have backed an effort to initiate a global ban on developing fully autonomous weapons.

To Kanaan, however, it would be a big problem if the military couldn’t work with researchers like those who developed the OpenAI Five. Even more disturbing is the prospect of an adversary gaining access to such cutting-edge technology. “The code is just out there for anyone to use,” he said. He added: “War is far more complex than some video game.”

Five algorithms work together to outwit five humans in the battlefield-based video game Dota 2. (Courtesy image)

The AI surge

Kanaan is generally very bullish about AI, partly because he knows firsthand how useful it stands to be for troops. Six years ago, as an Air Force intelligence officer in Afghanistan, he was responsible for deploying a new kind of intelligence-gathering tool: a hyperspectral imager. The instrument can spot objects that are normally hidden from view, like tanks draped in camouflage or emissions from an improvised bomb-making factory. Kanaan says the system helped US troops remove many thousands of pounds of explosives from the battlefield. Even so, it was often impractical for analysts to process the vast amounts of data collected by the imager. “We spent too much time looking at the data and not enough time making decisions,” he says. “Sometimes it took so long that you wondered if you could’ve saved more lives.”

A solution came from a breakthrough in computer vision by a team led by Geoffrey Hinton at the University of Toronto. It showed that a many-layered neural network, an algorithm loosely inspired by the brain, could recognize objects in images with unprecedented skill when given enough data and computer power.

Training a neural network involves feeding in data, like the pixels in an image, and continuously altering the connections in the network, using mathematical techniques, so that the output gets closer to a particular outcome, like identifying the object in the image. Over time, these deep-learning networks learn to recognize the patterns of pixels that make up houses or people. Advances in deep learning have sparked the current AI boom; the technology underpins Tesla’s autonomous systems and OpenAI’s algorithms.
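To make that description concrete, here is a minimal, hypothetical sketch of such a training loop in PyTorch. The two-layer network, the random stand-in images, and the learning rate are all illustrative choices, not any real Tesla or military system.

```python
# Minimal, illustrative sketch of the training loop described above (PyTorch).
# Random noise stands in for real images; the point is only the mechanics:
# feed pixels in, compare the output to the label, adjust the connections.
import torch
import torch.nn as nn

model = nn.Sequential(               # a tiny "deep" network: two layers of connections
    nn.Flatten(),
    nn.Linear(28 * 28, 128),
    nn.ReLU(),
    nn.Linear(128, 10),              # scores for 10 possible object classes
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(64, 1, 28, 28)   # stand-in batch of 64 grayscale images
labels = torch.randint(0, 10, (64,)) # their "correct" classes

for step in range(100):
    optimizer.zero_grad()
    outputs = model(images)          # feed the pixels forward through the network
    loss = loss_fn(outputs, labels)  # how far the output is from the desired outcome
    loss.backward()                  # compute how each connection should change
    optimizer.step()                 # nudge the connections in that direction

print(f"final loss: {loss.item():.3f}")
```

Over many such steps the loss falls and the network's outputs drift toward the labels, which is all that "learning to recognize the patterns of pixels" amounts to mechanically.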

Kanaan immediately recognized the potential of deep learning for processing the various types of images and sensor data that are essential to military operations. He and others in the Air Force soon began lobbying their superiors to invest in the technology. Their efforts have contributed to the Pentagon’s big AI push.

But shortly after deep learning burst onto the scene, researchers found that the very properties that make it so powerful are also an Achilles’ heel.

Just as it’s possible to calculate how to tweak a network’s parameters so that it classifies an object correctly, it is possible to calculate how minimal changes to the input image can cause the network to misclassify it. In such “adversarial examples,” just a few pixels in the image are altered, leaving it looking just the same to a person but very different to an AI algorithm. The problem can arise anywhere deep learning might be used—for example, in guiding autonomous vehicles, planning missions, or detecting network intrusions.
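To see how small those changes can be, here is a hypothetical sketch in the spirit of the fast gradient sign method, one common way of generating adversarial examples. The toy model, the random data, and the epsilon value are stand-ins for illustration, not the code used by the Tencent or MIT researchers.

```python
# Illustrative adversarial-example sketch (fast-gradient-sign style).
# Instead of adjusting the network's weights, we adjust the *input pixels*
# in the direction that most increases the loss, by an imperceptible amount.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

images = torch.rand(64, 1, 28, 28)       # stand-in batch of images
labels = torch.randint(0, 10, (64,))     # their correct classes

epsilon = 0.01                           # maximum change per pixel
adv = images.clone().requires_grad_(True)
loss_fn(model(adv), labels).backward()   # gradient of the loss w.r.t. the pixels

# Nudge every pixel slightly in the worst possible direction for the classifier.
perturbed = (adv + epsilon * adv.grad.sign()).clamp(0, 1).detach()

changed = (model(perturbed).argmax(1) != model(images).argmax(1)).float().mean()
print(f"fraction of predictions flipped: {changed.item():.2f}")
```

Because the perturbation is capped at epsilon per pixel, the altered images look unchanged to a person even when the network's predictions flip.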

Amid the buildup in military uses of AI, these mysterious vulnerabilities in the software have been getting far less attention.

Moving targets

One remarkable object serves to illustrate the power of adversarial machine learning. It’s a model turtle.

To you or me it looks normal, but to a drone or a robot running a particular deep-learning vision algorithm, it seems to be … a rifle. In fact, at one point the unique pattern of markings on the turtle’s shell could be recrafted so that an AI vision system made available through Google’s cloud would mistake it for just about anything. (Google has since updated the algorithm so that it isn’t fooled.)

The turtle was created not by some nation-state adversary, but by four guys at MIT. One of them is Anish Athalye, a lanky and very polite young man who works on computer security in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). In a video on Athalye’s laptop of the turtles being tested (some of the models were stolen at a conference, he says), one is rotated through 360 degrees and flipped upside down. The algorithm detects the same thing over and over: “rifle,” “rifle,” “rifle.”

The earliest adversarial examples were brittle and prone to failure, but Athalye and his friends believed they could design a version robust enough to work on a 3D-printed object. This involved modeling 3D renderings of the object and developing an algorithm to create the turtle, an adversarial example that would work at different angles and distances. Put more simply, they developed an algorithm to create something that would reliably fool a machine-learning model.
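The published technique behind the turtle is known as Expectation Over Transformation: rather than optimizing the perturbation for a single view, you optimize it so that it survives many randomly sampled views at once. The sketch below is a heavily simplified, hypothetical version of that idea; random pixel shifts stand in for the full 3D rendering of different angles and distances, and the model, data, and target class are toy placeholders.

```python
# Heavily simplified sketch of the "survive many viewpoints" idea behind the turtle.
# Random pixel shifts stand in for the real rendering of different angles/distances.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()

image = torch.rand(1, 1, 28, 28)                      # the object to camouflage
target = torch.tensor([5])                            # hypothetical target class ("rifle")

delta = torch.zeros_like(image, requires_grad=True)   # the adversarial pattern
opt = torch.optim.Adam([delta], lr=0.01)

for step in range(200):
    opt.zero_grad()
    loss = 0.0
    for _ in range(8):                                # average the attack over random "views"
        dx, dy = (int(v) for v in torch.randint(-3, 4, (2,)))
        view = torch.roll(image + delta, shifts=(dx, dy), dims=(2, 3))
        loss = loss + loss_fn(model(view), target)    # push every view toward the target
    (loss / 8).backward()                             # gradient through all views at once
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.05, 0.05)                     # keep the pattern subtle
```

Because every gradient step is averaged over many views, the resulting pattern keeps working when the object is shifted or rotated, which is what the video of the turtle demonstrates.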

The military applications are obvious. Using adversarial algorithmic camouflage, tanks or planes might hide from AI-equipped satellites and drones. AI-guided missiles could be blinded by adversarial data, and perhaps even steered back toward friendly targets. Information fed into intelligence algorithms might be poisoned to disguise a terrorist threat or set a trap for troops in the real world.

Athalye is surprised by how little concern over adversarial machine learning he has encountered. “I’ve talked to a bunch of people in industry, and I asked them if they are worried about adversarial examples,” he says. “The answer is, almost across the board, no.”

Fortunately, the Pentagon is starting to take notice. This August, the Defense Advanced Research Projects Agency (DARPA) announced several big AI research projects. Among them is GARD, a program focused on adversarial machine learning. Hava Siegelmann, a professor at the University of Massachusetts, Amherst, and the program manager for GARD, says these attacks could be devastating in military situations because people cannot identify them. “It’s like we’re blind,” she says. “That’s what makes it really very dangerous.”

The challenges presented by adversarial machine learning also explain why the Pentagon is so keen to work with companies like Google and Amazon as well as academic institutions like MIT. The technology is evolving fast, and the latest advances are taking hold in labs run by Silicon Valley companies and top universities, not conventional defense contractors.

Crucially, they’re also happening outside the US, particularly in China. “I do think that a different world is coming,” says Kanaan, the Air Force AI expert. “And it’s one we have to combat with AI.”

The backlash against military use of AI is understandable, but it may miss the bigger picture. Even as people worry about intelligent killer robots, perhaps a bigger near-term risk is an algorithmic fog of war—one that even the smartest machines cannot peer through.

 

Will Knight was until recently senior editor for AI at MIT Technology Review, and now works at Wired.
