Stochastic coordinate descent for 01 loss and its sensitivity to adversarial attacks

Document Type

Conference Proceeding

Publication Date

12-1-2019

Abstract

The 01 loss, while hard to optimize, is the least sensitive to outliers compared to its continuous and differentiable counterparts, namely the hinge and logistic losses. Recently the 01 loss has been shown to be more robust than surrogate losses against corrupted labels, which can be interpreted as adversarial attacks. Here we propose a stochastic coordinate descent heuristic for linear 01 loss classification. We implement and study our heuristic on real datasets from the UCI machine learning archive and find our method comparable to the support vector machine in accuracy and tractable in training time. We conjecture that the 01 loss may be harder to attack in a black box setting due to its discontinuity and infinite solution space. We train our linear classifier with a one-vs-one multi-class strategy on the CIFAR10 and STL10 image benchmark datasets. In both cases we find our classifier to have the same accuracy as the linear support vector machine but to be more resilient to black box attacks: on CIFAR10 the linear support vector machine has 0% accuracy on adversarial examples while the 01 loss classifier hovers around 10%, and on STL10 the linear support vector machine likewise has 0% accuracy whereas the 01 loss classifier is at 10%. Our work here suggests that the 01 loss may be more resilient to adversarial attacks than the hinge loss, and further work is required.
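
The abstract does not spell out the update rule of the heuristic, so the following is only a minimal sketch of one plausible stochastic coordinate descent loop for linear 01 loss classification. The function names (scd_01, zero_one_loss), the coordinate perturbation scheme, and the step size and iteration counts are illustrative assumptions, not the authors' implementation.

    import numpy as np

    def zero_one_loss(w, Xa, y):
        # y in {-1, +1}; Xa carries an appended bias column, so the rule is sign(Xa @ w)
        return np.sum(np.sign(Xa @ w) != y)

    def scd_01(X, y, iters=2000, step=0.1, seed=0):
        # Illustrative sketch: perturb one coordinate at a time and keep the move
        # only if the (non-differentiable) 01 loss does not increase.
        rng = np.random.default_rng(seed)
        Xa = np.hstack([X, np.ones((X.shape[0], 1))])  # bias term as an extra coordinate
        w = rng.standard_normal(Xa.shape[1])
        best = zero_one_loss(w, Xa, y)
        for _ in range(iters):
            j = rng.integers(w.size)               # pick a coordinate at random
            delta = step * rng.standard_normal()
            w[j] += delta                          # tentative coordinate update
            loss = zero_one_loss(w, Xa, y)
            if loss <= best:
                best = loss                        # accept: 01 loss did not worsen
            else:
                w[j] -= delta                      # reject: revert the coordinate
        return w, best

Because the 01 loss is flat almost everywhere, accepting equal-loss moves lets such a search drift across the large solution space the abstract alludes to; for the CIFAR10 and STL10 experiments the abstract describes training one such binary classifier per pair of classes (one-vs-one).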

Identifier

85080954777 (Scopus)

ISBN

9781728145495

Publication Title

Proceedings - 18th IEEE International Conference on Machine Learning and Applications, ICMLA 2019

External Full Text Location

https://doi.org/10.1109/ICMLA.2019.00056

First Page

299

Last Page

304
