WIREs Computational Statistics

Adversarial machine learning for cybersecurity and computer vision: Current developments and challenges


Abstract

We provide a comprehensive overview of adversarial machine learning, focusing on two application domains: cybersecurity and computer vision. Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques: they are vulnerable to carefully crafted attacks from malicious adversaries. For example, deep neural networks fail to correctly classify adversarial images, which are generated by adding imperceptible perturbations to clean images. We first discuss three main categories of attacks against machine learning techniques: poisoning attacks, evasion attacks, and privacy attacks. We then introduce the corresponding defense approaches, along with the weaknesses and limitations of existing defenses. We note that adversarial samples in cybersecurity and computer vision are fundamentally different: adversarial samples in cybersecurity often have properties and distributions that differ from the training data, whereas adversarial images in computer vision are created with minor input perturbations. This further complicates the development of robust learning techniques, because a robust learning technique must withstand different types of attacks.

This article is categorized under:

Statistical Learning and Exploratory Methods of the Data Sciences > Clustering and Classification
Statistical Learning and Exploratory Methods of the Data Sciences > Deep Learning
Statistical and Graphical Methods of Data Analysis > Robust Methods
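The imperceptible perturbations mentioned in the abstract are commonly crafted with the fast gradient sign method (FGSM), which nudges each pixel in the direction that increases the classifier's loss. The sketch below is illustrative only and is not taken from the article; the model, the epsilon value, and the random input are assumptions.

import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example by stepping along the sign of the input gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Add a small perturbation that increases the loss, then clamp to a valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

# Toy usage: a linear classifier on 28x28 grayscale digits (a hypothetical stand-in model).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # stand-in for a clean image of a handwritten digit
y = torch.tensor([4])             # its true label
x_adv = fgsm_attack(model, x, y)  # visually similar to x, but more likely to be misclassified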
Figure captions:

A spam email sent on February 6, 2019, and received in the Yahoo Spam folder
A standard classification boundary (dashed line) versus a conservative boundary (solid line)
Illustration of a poisoning attack, an evasion attack, and a privacy attack
Two adversarial samples. Left: a handwritten 4 is misclassified as a 9; right: a handwritten 9 is misclassified as a 4

