Journal article — Neurocomputing, 2023

Proximal boosting: aggregating weak learners to minimize non-differentiable losses

Erwan Fouillen, Claire Boyer, Maxime Sangnier

Abstract

Gradient boosting is a prediction method that iteratively combines weak learners to produce a complex and accurate model. From an optimization point of view, the learning procedure of gradient boosting mimics a gradient descent on a functional variable. When the empirical risk to minimize is not differentiable, this paper proposes to build upon the proximal point algorithm to introduce a novel boosting approach, called proximal boosting. It comes with a companion algorithm, inspired by [1] and called residual proximal boosting, which aims at better controlling the approximation error. Theoretical convergence is proved for both procedures under different hypotheses on the empirical risk, and the advantages of leveraging proximal methods for boosting are illustrated by numerical experiments on simulated and real-world data. In particular, we exhibit a favorable comparison with gradient boosting in terms of convergence rate and prediction accuracy.
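The abstract describes replacing the gradient step of gradient boosting with a proximal point step on the empirical risk when the loss is non-differentiable. Below is a minimal, hypothetical Python sketch of that general idea only, not the authors' exact algorithm: it uses the absolute loss, whose proximal operator has a closed form, and scikit-learn regression trees as weak learners; all function names and parameters are illustrative assumptions.

    # Hypothetical sketch of the proximal-boosting idea: at each round, a proximal
    # step on a non-differentiable loss replaces the usual gradient step, and a
    # weak learner is fitted to approximate the resulting move in function space.
    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def prox_abs_loss(pred, y, gamma):
        # Proximal operator of u -> |u - y| with step size gamma:
        # pull predictions toward the targets, clipped at gamma.
        return pred - np.clip(pred - y, -gamma, gamma)

    def proximal_boosting_fit(X, y, n_rounds=100, gamma=1.0, learning_rate=0.1, max_depth=3):
        pred = np.zeros_like(y, dtype=float)  # current ensemble predictions F_t(x_i)
        learners = []
        for _ in range(n_rounds):
            target = prox_abs_loss(pred, y, gamma)          # proximal point step on the loss
            h = DecisionTreeRegressor(max_depth=max_depth)  # weak learner approximates the move
            h.fit(X, target - pred)
            pred += learning_rate * h.predict(X)            # shrunken additive update
            learners.append(h)
        return learners

    def proximal_boosting_predict(learners, X, learning_rate=0.1):
        return learning_rate * sum(h.predict(X) for h in learners)

With a differentiable loss and the gradient step restored, the same loop reduces to standard gradient boosting; the only change sketched here is which point-wise targets the weak learner is fitted to.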
Main file: paper_flat.pdf (1.21 MB). Origin: files produced by the author(s).

Dates and versions

hal-01853244 , version 1 (02-08-2018)
hal-01853244 , version 2 (22-01-2020)
hal-01853244 , version 3 (27-07-2021)
hal-01853244 , version 4 (29-11-2022)

Identifiers

HAL Id: hal-01853244
DOI: 10.1016/j.neucom.2022.11.065

Cite

Erwan Fouillen, Claire Boyer, Maxime Sangnier. Proximal boosting: aggregating weak learners to minimize non-differentiable losses. Neurocomputing, 2023, ⟨10.1016/j.neucom.2022.11.065⟩. ⟨hal-01853244v4⟩