Some Optimal Results in Robust Classification


Department of Mathematical Sciences

Location: North Building, Room 316

Speaker: Jie Shen, Computer Science, Stevens Institute of Technology

Refreshments will be served at 4:00 PM.

ABSTRACT

Learning linear classifiers (i.e., halfspaces) is one of the fundamental problems in machine learning, dating back to the 1950s. In the presence of benign label noise, such as random classification noise, the problem is well understood. However, when the data are corrupted by more realistic noise, even establishing polynomial-time learnability can be nontrivial. In this talk, I will introduce our recent work on learning with malicious noise (a.k.a. data poisoning attacks), where an adversary may inspect the learning algorithm and inject malicious data. We present the first sample-optimal learning algorithm that achieves information-theoretic noise tolerance under log-concave distributional assumptions. We further show that when the clean data are separable with a small margin, there exists a linear-time learning algorithm with constant noise tolerance.
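To make the setting concrete, here is a toy illustration of halfspace learning under label corruption. This is not the speaker's algorithm; it is a minimal sketch, assuming Gaussian data labeled by a ground-truth halfspace, a small fraction of randomly flipped labels standing in for an adversary, and a simple averaging estimator as the learner:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: labels come from a ground-truth halfspace sign(<w*, x>).
d, n = 5, 2000
w_star = rng.normal(size=d)
w_star /= np.linalg.norm(w_star)
X = rng.normal(size=(n, d))
y = np.sign(X @ w_star)

# Label corruption: flip a small fraction eta of labels. (A malicious
# adversary could also choose the corrupted points adaptively; random
# flips are used here only to keep the sketch simple.)
eta = 0.05
flip = rng.random(n) < eta
y_noisy = np.where(flip, -y, y)

# A simple baseline learner: the averaging estimator, i.e. the empirical
# mean of y * x, whose expectation is proportional to w* under the
# symmetric (Gaussian) distribution used here.
w_hat = (y_noisy[:, None] * X).mean(axis=0)
w_hat /= np.linalg.norm(w_hat)

# Evaluate against the clean labels.
acc = np.mean(np.sign(X @ w_hat) == y)
print(f"cosine(w_hat, w*) = {w_hat @ w_star:.3f}, clean accuracy = {acc:.3f}")
```

With random flips, the averaging estimator stays well aligned with the true halfspace; the talk concerns the much harder regime where the corruptions are chosen adversarially, for which such naive estimators can fail.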

BIOGRAPHY


Jie Shen is an Assistant Professor of Computer Science at Stevens. He received his PhD from Rutgers University under the direction of Ping Li and Pranjal Awasthi. Professor Shen's research interests include theoretical and practical aspects of machine learning.


A campus map is available at https://tour.stevens.edu. Additional information is available at https://web.stevens.edu/algebraic/.

Zoom Link:

https://stevens.zoom.us/j/93228680142 (Password: ACC)