Please use this identifier to cite or link to this item: http://hdl.handle.net/123456789/3369
Title: Least squares optimization with L1-norm regularization
Authors: Nkansah, Henrietta
Keywords: Smoothing Approximations
Sparsity Induced Systems
Least Squares Optimization
L2-Norm Regularization
L1-Norm Regularization
Non-Smoothing Approximations
Issue Date: Oct-2017
Publisher: University of Cape Coast
Abstract: The non-differentiable L1-norm penalty in the L1-norm regularized least squares problem poses a major challenge to obtaining an analytic solution. The study therefore explores smoothing and non-smoothing approximations that yield a differentiable loss functional and thus ensure a closed-form solution in over-determined systems. Three smoothing approximations to the L1-norm penalty have been examined: the Quadratic, the Sigmoid and the Cubic Hermite. Tikhonov regularization is then applied to the resulting loss function. The approximations are modifications of Lee's approximation to the L1-norm term. The regularized solution using this approximation has been presented in various forms. Using the 12×12 Hilbert matrix, it is found that for all three methods a good approximation to the exact solution converges at a regularization parameter µ = 10^−30. The solutions are accurate to nine digits. In each approximation, a suitable value of the parameter is obtained for which the absolute value function approximates the L1-norm penalty. The results of the Modified Newton's method based on Lee's approximation, however, show an accuracy of at most two digits. The solutions of the smoothing methods also compare favourably with the l1_ls method. An analytic solution of the L1-norm problem is also obtained by means of the sub-gradient method, after casting the constrained formulation as unconstrained. Sparsity of the Least Absolute Shrinkage and Selection Operator (LASSO) solution has been pursued in two ways. First, the initial solution is expressed in terms of the singular value decomposition, so that by truncating the smaller singular values the desired sparsity is achieved, using a suitable regularization parameter obtained by K-fold cross validation of the fit. Second, the LASSO solution itself has been induced to ensure sparsity.
The results show that the LASSO formulation and solution must be appropriately designed for certain types of data sets, particularly those that are severely ill-conditioned and those with monotone trends.
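The smoothing idea described in the abstract can be sketched in a few lines. This is a generic illustration only: it uses the common quadratic smoothing sqrt(x² + ε²) of |x| and an iteratively reweighted least squares (IRLS) solve, not the thesis' exact Quadratic, Sigmoid, or Cubic Hermite formulas; the parameter values are arbitrary test choices.

```python
import numpy as np

def smoothed_l1_ls(A, b, mu=1e-4, eps=1e-6, iters=50):
    """Least squares with a smoothed L1 penalty (illustrative sketch).

    Minimizes (1/2)||Ax - b||^2 + mu * sum(sqrt(x_i^2 + eps^2)).
    The smoothed penalty is differentiable, with gradient
    mu * x_i / sqrt(x_i^2 + eps^2), so each IRLS step reduces to a
    Tikhonov-like linear system with a diagonal weight matrix.
    """
    AtA, Atb = A.T @ A, A.T @ b
    # Ridge (L2-regularized) starting point
    x = np.linalg.solve(AtA + mu * np.eye(A.shape[1]), Atb)
    for _ in range(iters):
        w = 1.0 / np.sqrt(x**2 + eps**2)          # IRLS weights for |x_i|
        x = np.linalg.solve(AtA + mu * np.diag(w), Atb)
    return x

# Over-determined example with a sparse ground truth (noise-free)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = smoothed_l1_ls(A, b)
```

As ε shrinks, sqrt(x² + ε²) approaches |x|, so the smoothed problem approaches the L1-norm regularized one while remaining differentiable everywhere.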
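The truncated-SVD route mentioned in the abstract can likewise be sketched. The 12×12 Hilbert matrix matches the test problem named in the abstract, but the truncation level here is an illustrative choice, not the study's cross-validated parameter.

```python
import numpy as np

def tsvd_solution(A, b, k):
    """Truncated-SVD least squares solution (illustrative sketch).

    Expresses the solution in terms of the SVD of A and keeps only the
    k largest singular values, discarding the small ones that amplify
    noise in severely ill-conditioned systems such as Hilbert matrices.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = (U.T @ b)[:k] / s[:k]    # filter out the small singular values
    return Vt[:k].T @ coeffs

# The severely ill-conditioned 12x12 Hilbert matrix, H[i, j] = 1/(i+j+1)
n = 12
H = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = H @ x_true
x_k = tsvd_solution(H, b, k=8)    # keep 8 of the 12 singular values
```

Because the discarded singular values of the Hilbert matrix are tiny, the truncated solution still reproduces the right-hand side to high accuracy while suppressing the directions that make the full inverse unstable.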
Description: xv, 240p.: ill.
URI: http://hdl.handle.net/123456789/3369
ISSN: 2310-5496
Appears in Collections: Department of Mathematics & Statistics

Files in This Item:
File: NKANSAH 2017.pdf
Description: PhD Thesis
Size: 3.58 MB
Format: Adobe PDF


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.