ROOTs 2020: A survey on practical adversarial examples for malware classifiers – Daniel Park
Machine learning based models have proven effective in a variety of problem spaces, especially in malware detection and classification. However, with the discovery that deep learning models are vulnerable to adversarial perturbations, a new class of attacks against these models has emerged. The first attacks based on adversarial example research focused on generating adversarial feature vectors, but more recent research shows it is possible to generate evasive malware samples themselves. In this talk, I will discuss several attacks against machine learning based malware classifiers that leverage adversarial perturbations to produce adversarial malware examples. Adversarial malware examples differ from adversarial examples in the natural image domain in that they must retain the original malicious program logic in addition to evading detection or classification. Adversarial machine learning has become increasingly popular and is an active area of research.
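To make the distinction between feature-space attacks and adversarial malware concrete, here is a minimal sketch of an FGSM-style feature-space evasion, assuming a hypothetical differentiable surrogate classifier (a logistic regression over binary malware features). The weights, feature vector, and epsilon value are illustrative assumptions, not the attacks covered in the talk; the point is that the output is only a perturbed feature vector, not a working executable.

    # Sketch of a feature-space evasion attack against an assumed surrogate model.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical surrogate: p(malicious) = sigmoid(w . x + b)
    w = rng.normal(size=128)                      # assumed learned feature weights
    b = 0.1
    x = (rng.random(128) < 0.3).astype(float)     # binary features of a "malware" sample

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Gradient of the malicious-class score with respect to the input features.
    p = sigmoid(w @ x + b)
    grad = p * (1.0 - p) * w

    # FGSM-style step: push features in the direction that lowers the score.
    eps = 0.5
    x_adv = np.clip(x - eps * np.sign(grad), 0.0, 1.0)

    print(f"score before: {p:.3f}, after: {sigmoid(w @ x_adv + b):.3f}")
    # Note: x_adv is only a perturbed feature vector. Mapping it back to a real,
    # still-functional binary is the hard part that separates adversarial malware
    # from adversarial images.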