GO Loss: A Gaussian Distribution-Based Orthogonal Decomposition Loss for Classification

Bibliographic Details
Main Authors: Mengxin Liu, Wenyuan Tao, Xiao Zhang, Yi Chen, Jie Li, Chung-Ming Own
Format: Article
Language:English
Published: Wiley 2019-01-01
Series:Complexity
Online Access:http://dx.doi.org/10.1155/2019/9206053
author Mengxin Liu
Wenyuan Tao
Xiao Zhang
Yi Chen
Jie Li
Chung-Ming Own
collection DOAJ
description We present a novel loss function, GO loss, for classification. Most existing methods, such as center loss and contrastive loss, dynamically determine the convergence direction of the sample features during training. By contrast, GO loss decomposes the convergence direction into two mutually orthogonal components, the tangential and radial directions, and optimizes them separately. The two components respectively govern the interclass separation and the intraclass compactness of the feature distribution, so minimizing their losses separately avoids interference between the two optimizations, and a stable convergence center can be obtained for each. Moreover, we assume that the two components follow Gaussian distributions, which is shown to be an effective way to accurately model the training features and improve classification performance. Experiments on multiple classification benchmarks, including MNIST, CIFAR, and ImageNet, demonstrate the effectiveness of GO loss.
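The core idea of the abstract, splitting a feature's direction into mutually orthogonal radial and tangential components relative to a class-center direction, can be illustrated with a minimal sketch. This is not the paper's actual GO loss formulation (which involves Gaussian modeling of both components); the function name and plain-list vectors below are illustrative assumptions only.

```python
import math

def orthogonal_decompose(feature, center):
    """Split `feature` into a radial component (along the class-center
    direction) and a tangential component (orthogonal to it).
    Illustrative sketch only; see the paper for the full GO loss."""
    # unit vector along the class-center direction
    norm_c = math.sqrt(sum(x * x for x in center))
    c_unit = [x / norm_c for x in center]
    # radial component: projection of the feature onto the center direction
    scale = sum(f * c for f, c in zip(feature, c_unit))
    radial = [scale * c for c in c_unit]
    # tangential component: the orthogonal remainder
    tangential = [f - r for f, r in zip(feature, radial)]
    return radial, tangential

radial, tangential = orthogonal_decompose([3.0, 4.0], [1.0, 0.0])
# radial = [3.0, 0.0], tangential = [0.0, 4.0]; their dot product is 0
```

Because the two components are orthogonal, a penalty on the tangential part (angular position, hence interclass separation) does not perturb a penalty on the radial part (distance from the center, hence intraclass compactness), which is the separation of concerns the abstract describes.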
format Article
id doaj-art-4d161328c26e4150bea5edb06580768f
institution Kabale University
issn 1076-2787
1099-0526
language English
publishDate 2019-01-01
publisher Wiley
record_format Article
series Complexity
affiliations:
Mengxin Liu: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
Wenyuan Tao: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
Xiao Zhang: Department of Electronic Engineering, The Chinese University of Hong Kong, Hong Kong SAR 999077, China
Yi Chen: Beijing Key Laboratory of Big Data Technology for Food Safety, Beijing Technology and Business University, Beijing 100048, China
Jie Li: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
Chung-Ming Own: College of Intelligence and Computing, Tianjin University, Tianjin 300072, China
title GO Loss: A Gaussian Distribution-Based Orthogonal Decomposition Loss for Classification
url http://dx.doi.org/10.1155/2019/9206053