Academic Seminar, Department of Statistics and Operations Research, School of Mathematical Sciences
Avoidance of strict saddle points of nonconvex regularization
Hao Wang, Associate Professor
(ShanghaiTech University)
Time: 2:00-3:30 p.m., Friday, July 5, 2024
Venue: Conference Room 404, National Laboratory Building E, Shahe Campus, Beihang University
Abstract:
In this paper, we consider a class of nonconvex and nonsmooth sparse optimization problems that encompasses most existing nonconvex sparsity-inducing terms. We show that the second-order optimality conditions depend only on the nonzero entries of the critical points. We propose two damped iteratively reweighted algorithms, the damped iteratively reweighted ℓ1 algorithm (DIRL1) and the damped iteratively reweighted ℓ2 algorithm (DIRL2), to solve these problems. For DIRL1, we show that the reweighted subproblem has the support identification property, so that DIRL1 locally reverts to a gradient descent algorithm around a critical point. For DIRL2, we show that the solution map of the reweighted subproblem is differentiable and Lipschitz continuous everywhere. Therefore, the iteration maps of DIRL1 and DIRL2 and their inverses are Lipschitz continuous, and strict saddle points are unstable fixed points of these maps. By applying the stable manifold theorem, we show that, under the strict saddle point property, these algorithms converge only to local minimizers with random initialization.
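To make the flavor of such iterations concrete, the following is a minimal sketch of a damped iteratively reweighted ℓ1 step for a least-squares loss with a log-sum penalty; the model problem, the damping factor beta, the smoothing parameter eps, and the ISTA inner solver are illustrative assumptions rather than details taken from the talk.

import numpy as np

# Minimal sketch of a damped iteratively reweighted l1 (IRL1) scheme for
#   min_x  0.5*||A x - b||^2 + sum_i r(|x_i|),  with r(t) = lam*log(1 + t/eps).
# The damping factor beta, the smoothing eps, and the use of a few
# proximal-gradient (ISTA) steps for the weighted-l1 subproblem are
# illustrative choices, not details from the talk or the paper.

def soft_threshold(z, tau):
    """Entrywise soft-thresholding: prox of a weighted l1 term."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def damped_irl1(A, b, lam=0.1, eps=1e-2, beta=0.7,
                outer_iters=50, inner_iters=20):
    m, n = A.shape
    x = np.zeros(n)
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient of 0.5*||Ax-b||^2
    step = 1.0 / L
    for _ in range(outer_iters):
        # Weights from the derivative of the concave penalty at the current iterate.
        w = lam / (eps + np.abs(x))
        # Approximately solve the weighted-l1 subproblem by a few ISTA steps.
        y = x.copy()
        for _ in range(inner_iters):
            grad = A.T @ (A @ y - b)
            y = soft_threshold(y - step * grad, step * w)
        # Damped update: convex combination of the old iterate and the subproblem solution.
        x = (1.0 - beta) * x + beta * y
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    x_true = np.zeros(100); x_true[:5] = rng.standard_normal(5)
    b = A @ x_true + 0.01 * rng.standard_normal(40)
    x_hat = damped_irl1(A, b)
    print("recovered support:", np.nonzero(np.abs(x_hat) > 1e-3)[0])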
Speaker Biography: Dr. Hao Wang received his Ph.D. from the Department of Industrial Engineering at Lehigh University (USA) in May 2015, advised by Frank E. Curtis, and received his M.S. and B.S. degrees from the Department of Mathematics and Applied Mathematics at Beihang University in 2010 and 2007, respectively. He joined the School of Information Science and Technology at ShanghaiTech University in March 2016. His main research interests are nonlinear programming problems arising in applications in operations research, computer science, and statistics, with work spanning algorithm design, convergence analysis, and software development. His current topics of interest include stochastic nonlinear optimization algorithms, regularization techniques in machine learning, low-rank matrix completion, and nonlinear optimization algorithms for potentially infeasible problems.
Host: Hongying Liu