Authors
Abhishake Rastogi, Indian Institute of Technology - Delhi, India
Abstract
In learning theory, the convergence of the regression problem is investigated for least-squares Tikhonov regularization schemes in both the RKHS norm and the L^2-norm. We consider the multi-penalty least-squares regularization scheme under a general source condition with polynomial decay of the eigenvalues of the integral operator. One motivation for this work is to address convergence issues for the widely studied manifold regularization scheme. Optimal convergence rates for the multi-penalty regularizer are achieved in the interpolation norm using the concept of the effective dimension. Further, we propose a penalty balancing principle, based on augmented Tikhonov regularization, for the choice of the regularization parameters. The superiority of multi-penalty regularization over single-penalty regularization is demonstrated on an academic example and the moon data set.
Keywords
Learning theory, Multi-penalty regularization, General source condition, Optimal rates, Penalty balancing principle.