
Frontiers of Information Technology & Electronic Engineering, 2020, Volume 21, Issue 4. doi: 10.1631/FITEE.1800566

An efficient parallel and distributed solution to nonconvex penalized linear SVMs

1. College of Computer, National University of Defense Technology, Changsha 410073, China
2. School of Computer Science, Fudan University, Shanghai 201203, China
3. Shanghai Key Laboratory of Data Science, Shanghai 201203, China

Available online: 2020-05-13


Abstract

Support vector machines (SVMs) are a powerful tool for linear classification. When combined with a sparsity-inducing nonconvex penalty, SVMs can perform classification and variable selection simultaneously. However, nonconvex penalized SVMs generally cannot be solved globally and efficiently because of their nondifferentiability, nonconvexity, and nonsmoothness. Existing solvers for nonconvex penalized SVMs typically work in a serial fashion and therefore cannot fully exploit the parallel computing power of modern multi-core machines. Moreover, the fact that many real-world datasets are stored in a distributed manner calls for a parallel and distributed solution to nonconvex penalized SVMs. To address this challenge, we propose an efficient algorithm based on the alternating direction method of multipliers (ADMM) that solves nonconvex penalized SVMs in a parallel and distributed way. We design several techniques to reduce the computation and synchronization costs of the proposed parallel algorithm, and a time complexity analysis shows that its time complexity is low. Moreover, the convergence of the parallel algorithm is guaranteed. Experimental evaluations on four LIBSVM benchmark datasets demonstrate the efficiency of the proposed parallel algorithm.
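As a rough illustration of the kind of approach the abstract describes, the sketch below shows how a consensus-ADMM splitting could fit a linear SVM with a nonconvex penalty over several data blocks in parallel. This is not the authors' algorithm: the squared hinge loss, the MCP penalty, the gradient-based local solver, and all parameter names (rho, lam, gamma, n_blocks) are illustrative assumptions made here for the sketch.

```python
# Hypothetical sketch of consensus ADMM for a nonconvex (MCP) penalized linear SVM.
# Not the algorithm from the paper: loss, penalty, solver, and parameters are assumptions.
import numpy as np

def mcp_prox(v, t, lam, gamma):
    """Proximal operator of the MCP penalty (firm thresholding) with step size t.
    Assumes gamma > t so the thresholding rule below is well defined."""
    out = np.where(np.abs(v) > gamma * lam, v, 0.0)
    mid = (np.abs(v) > t * lam) & (np.abs(v) <= gamma * lam)
    out[mid] = np.sign(v[mid]) * (np.abs(v[mid]) - t * lam) / (1.0 - t / gamma)
    return out

def local_update(X, y, z, u, rho, steps=50, lr=1e-3):
    """Approximately solve the local subproblem
       min_w  sum_j max(0, 1 - y_j x_j^T w)^2 + (rho/2) ||w - z + u||^2
    with plain gradient descent (the squared hinge keeps it differentiable)."""
    w = z.copy()
    for _ in range(steps):
        margins = 1.0 - y * (X @ w)
        active = margins > 0
        grad = -2.0 * (X[active].T @ (margins[active] * y[active])) + rho * (w - z + u)
        w -= lr * grad
    return w

def consensus_admm_svm(X, y, n_blocks=4, rho=1.0, lam=0.1, gamma=3.0, iters=100):
    """Split rows of (X, y) into blocks; each block keeps a local weight vector,
    and the global consensus variable z carries the nonconvex penalty."""
    blocks = np.array_split(np.arange(X.shape[0]), n_blocks)
    d = X.shape[1]
    z = np.zeros(d)
    W = np.zeros((n_blocks, d))   # local weight vectors
    U = np.zeros((n_blocks, d))   # scaled dual variables
    for _ in range(iters):
        # Local updates: each block touches only its own data, so they can run in parallel.
        for i, idx in enumerate(blocks):
            W[i] = local_update(X[idx], y[idx], z, U[i], rho)
        # Global update: average local solutions, then apply the nonconvex proximal step.
        z = mcp_prox((W + U).mean(axis=0), 1.0 / (n_blocks * rho), lam, gamma)
        # Dual updates.
        U += W - z
    return z
```

Under this splitting, the local updates are embarrassingly parallel and only the averaged vector needs to be synchronized once per iteration, which is the kind of low computation and synchronization cost the abstract refers to.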
