Smoothing gradient descent algorithm for the composite sparse optimization


Bibliographic Details
Main Authors: Wei Yang, Lili Pan, Jinhui Wan
Format: Article
Language: English
Published: AIMS Press 2024-11-01
Series: AIMS Mathematics
Online Access: https://www.aimspress.com/article/doi/10.3934/math.20241594
Description
Summary: Composite sparsity generalizes standard sparsity by considering sparsity on a linear transformation of the variables. In this paper, we study the composite sparse optimization problem of minimizing the sum of a nondifferentiable loss function and the $\ell_0$ penalty on the product of a matrix and the coefficient vector. First, we consider an exact continuous relaxation of the problem with a capped-$\ell_1$ penalty that has the same optimal solutions as the primal problem. Specifically, we define the lifted stationary points of the relaxation problem and then establish the equivalence of the original and relaxed problems. Second, we propose a smoothing gradient descent (SGD) algorithm for the continuous relaxation problem, which solves its subproblem inexactly since the objective function is inseparable. We show that if the sequence generated by the SGD algorithm has an accumulation point, then it is a lifted stationary point. Finally, we present several computational examples to illustrate the efficiency of the algorithm.
ISSN: 2473-6988
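
Below is a minimal, illustrative Python sketch of the smoothing-gradient-descent idea described in the abstract; it is not the authors' algorithm. It assumes an $\ell_1$ data-fit loss smoothed by a Huber function whose parameter $\mu$ is driven toward zero, and it handles the capped-$\ell_1$ relaxation of the $\ell_0$ term with a plain subgradient step rather than the paper's inexact subproblem solves. All names and parameter values (lam, nu, mu, step) are hypothetical.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): minimize
#   f(x) + lam * ||A x||_0,  with  f(x) = ||B x - b||_1  (nondifferentiable),
# via the capped-l1 relaxation  lam * sum_j min(1, |(A x)_j| / nu)
# and Huber smoothing of f with parameter mu, shrunk toward zero.

def huber_grad(r, mu):
    """Gradient of the Huber smoothing of |r| (elementwise)."""
    return np.where(np.abs(r) <= mu, r / mu, np.sign(r))

def capped_l1_subgrad(t, nu):
    """A subgradient of t -> min(1, |t|/nu); zero is chosen at the kinks."""
    return np.where((np.abs(t) > 0) & (np.abs(t) < nu), np.sign(t) / nu, 0.0)

def smoothing_gd(B, b, A, lam=0.1, nu=0.5, mu=1.0,
                 step=1e-2, inner=200, shrink=0.5, outer=8):
    x = np.zeros(B.shape[1])
    for _ in range(outer):
        for _ in range(inner):
            g = B.T @ huber_grad(B @ x - b, mu) \
                + lam * (A.T @ capped_l1_subgrad(A @ x, nu))
            x -= step * g
        mu *= shrink  # drive the smoothing parameter toward zero
    return x

# Toy usage: recover a piecewise-constant signal whose differences are
# sparse (A = first-order difference matrix), under an l1 data-fit term.
rng = np.random.default_rng(0)
n = 50
B = np.eye(n)
x_true = np.repeat([0.0, 1.0, -0.5], [20, 15, 15])
b = x_true + 0.05 * rng.standard_normal(n)
A = np.diff(np.eye(n), axis=0)  # (n-1) x n difference operator
x_hat = smoothing_gd(B, b, A)
print(np.round(x_hat[:5], 2))
```

As mu shrinks, the Huber-smoothed loss approaches the original l1 loss; the capping of the penalty at 1 stops large transformed coefficients from being shrunk further, which reflects (loosely) why the capped-$\ell_1$ relaxation can share optimal solutions with the $\ell_0$ problem in the paper's setting.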