[Department of Transdisciplinary Studies] Accelerated Gradient Methods for Large-scale Convex Optimization

Speaker: Dr. Donghwan Kim, Dartmouth College

Date & Time: May 24, 2018 (Thursday), 17:00

Where: Room D-123, Graduate School of Convergence Science and Technology

Abstract

Many modern applications, such as machine learning, inverse problems, and control, require solving large-dimensional optimization problems. First-order methods such as the gradient method are widely used for such large-scale problems, since their computational cost per iteration depends only mildly on the problem dimension. However, they suffer from slow convergence rates compared to second-order methods such as Newton's method. Accelerating the gradient method has therefore received great interest in the optimization community, leading to the development and extension of the conjugate gradient method, the heavy-ball method, and Nesterov's fast gradient method, which we review in this talk. The talk will then present newly proposed accelerated gradient methods, named the optimized gradient method (OGM) and OGM-G, which have the best known worst-case convergence rates for smooth convex optimization among accelerated gradient methods.
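For readers unfamiliar with the methods named above, the sketch below contrasts the plain gradient method with Nesterov's fast gradient method on a small least-squares problem. This is a hypothetical illustration only, not material from the talk; the OGM and OGM-G updates themselves are the subject of the talk and are not reproduced here.

```python
import numpy as np

# Illustrative smooth convex problem (hypothetical): f(x) = 0.5 * ||A x - b||^2
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)
L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient

def grad(x):
    return A.T @ (A @ x - b)

def gradient_method(x0, iters):
    """Plain gradient method: O(1/k) worst-case rate on smooth convex f."""
    x = x0.copy()
    for _ in range(iters):
        x = x - grad(x) / L
    return x

def nesterov_fgm(x0, iters):
    """Nesterov's fast gradient method: O(1/k^2) worst-case rate."""
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        y_next = x - grad(x) / L                      # gradient step
        t_next = (1 + np.sqrt(1 + 4 * t ** 2)) / 2    # momentum parameter update
        x = y_next + (t - 1) / t_next * (y_next - y)  # extrapolation (momentum)
        y, t = y_next, t_next
    return y

f = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2
x0 = np.zeros(50)
print("gradient method :", f(gradient_method(x0, 100)))
print("Nesterov FGM    :", f(nesterov_fgm(x0, 100)))
```

After the same number of iterations, the accelerated method typically reaches a markedly smaller objective value, which is the kind of speedup the talk quantifies through worst-case convergence rates.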

Biography

Donghwan Kim received his B.S. in Electrical Engineering from Seoul National University in 2009, and his M.S. and Ph.D. in Electrical Engineering: Systems from the University of Michigan in 2011 and 2014, respectively.
He is currently a research instructor in the Department of Mathematics at Dartmouth College.
His research interests are in the area of continuous optimization and inverse problems.

 

Host: Prof. Dongsuk Jeon, Intelligent Convergence Systems Major, Department of Transdisciplinary Studies (Contact: 031-888-9136, djeon1@snu.ac.kr)

2018-05-21