ROOTS 2020: A survey on practical adversarial examples for malware classifiers – Daniel Park

Sanna / November 18, 2020 / ROOTS

Machine learning-based models have proven effective in a variety of problem spaces, especially malware detection and classification. However, with the discovery that deep learning models are vulnerable to adversarial perturbations, a new class of attacks has emerged against these models. The first attacks based on adversarial example research focused on generating adversarial feature vectors, but more recent work shows that it is possible to generate evasive malware samples themselves.
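To make the feature-space idea concrete, below is a minimal sketch of a gradient-based perturbation (in the style of the fast gradient sign method) applied to a feature vector. The model, tensor shapes, and epsilon value are illustrative assumptions rather than anything from the talk, and real attacks on malware features additionally have to respect feature constraints (for example, binary-valued or integer-valued features).

```python
import torch
import torch.nn as nn

def fgsm_perturb(model: nn.Module, features: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.1) -> torch.Tensor:
    """Return an adversarially perturbed copy of a feature vector (FGSM-style).

    `model` is any differentiable classifier over the feature vector;
    `features` has shape (1, n_features) and `label` is the true class index.
    """
    features = features.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(features), label)
    loss.backward()
    # Step in the direction that increases the loss, nudging the sample
    # toward misclassification while keeping the change small.
    return (features + epsilon * features.grad.sign()).detach()
```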

In this talk, I will discuss several attacks developed against machine learning-based malware classifiers that leverage adversarial perturbations to craft adversarial malware examples. Adversarial malware examples differ from adversarial examples in the natural image domain in that they must preserve the original malicious program logic in addition to evading detection or classification.
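One commonly cited family of functionality-preserving transformations works at the byte level, for instance appending data to the end of a PE file (its overlay), which the loader ignores at run time. The sketch below illustrates that idea only; the function name and payload are assumptions, and it is not the specific attack discussed in the talk.

```python
from pathlib import Path

def append_overlay(binary_path: str, payload: bytes, out_path: str) -> None:
    """Append bytes to the end of a PE file as overlay data.

    Overlay bytes are not mapped or executed by the loader, so the
    program's behavior is unchanged, but the byte-level features seen
    by a static classifier (e.g., raw bytes or n-grams) do change.
    """
    data = Path(binary_path).read_bytes()
    Path(out_path).write_bytes(data + payload)
```

Published attacks typically do not append arbitrary data; they search over the appended bytes, guided by the classifier's gradients or scores, to push the sample across the decision boundary.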

Adversarial machine learning has become increasingly popular and is now seen outside of academic publications. For example, the 2020 DEF CON CTF Finals included an adversarial machine learning challenge. It is important to realize that adversarial examples are no longer just a threat to computer vision models; they already affect the security of our systems.


Daniel Park is a Ph.D. candidate in the Computer Science Department at Rensselaer Polytechnic Institute. His research lies at the intersection of computer security and machine learning, most recently focusing on the security of deep learning models. He is also interested in binary analysis techniques and participates in CTFs with RPISEC.
