About this Event
250 Hutchison Rd, Rochester, NY 14620
Learning through Hypergradients
Hypergradient generally refers to the gradient of a validation loss with respect to model hyperparameters. While it is usually adopted to maximize utility by tuning learning rates, momentum, dropout, etc., we are also interested in applying hypergradients to enable automated and efficient solutions for a wide range of bilevel problems. Specifically, we define hyperparameters adapted to different contexts, and design and develop various hypernetworks to compute or approximate their hypergradients. This talk will introduce our recent research efforts to deploy deep learning models with hypergradients. In particular, we propose a gradient-based evolution search framework that solves the traditional hyperparameter optimization (HPO) problem on data augmentations and regularizations by incorporating hypergradients into the mutation step. The proposed method instantiates a learnable trade-off between local and global HPO solutions. Following this work, we further introduce how to leverage hypergradients to realize automated graph neural networks and meta weights for node classification and fair ranking tasks, respectively. Finally, we will showcase a practical application of hypergradients to a computational imaging problem.
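To make the opening definition concrete, here is a minimal sketch of a hypergradient in a toy bilevel problem (this is an illustrative assumption, not the speaker's method): the inner problem is a 1-D ridge regression whose minimizer has a closed form, the outer objective is a validation loss, and the hypergradient is the derivative of that validation loss with respect to the regularization hyperparameter, obtained by the chain rule through the inner solution.

```python
# Toy bilevel setup (hypothetical example, not from the talk):
#   inner:  w*(lam) = argmin_w (w - a)^2 + lam * w^2  =>  w*(lam) = a / (1 + lam)
#   outer:  L_val(lam) = (w*(lam) - b)^2
# The hypergradient is dL_val/dlam, differentiating through w*(lam).

def inner_solution(lam, a):
    """Closed-form minimizer of the regularized training loss."""
    return a / (1.0 + lam)

def val_loss(lam, a, b):
    """Validation loss evaluated at the inner solution."""
    return (inner_solution(lam, a) - b) ** 2

def hypergradient(lam, a, b):
    """Analytic dL_val/dlam = 2 * (w* - b) * dw*/dlam."""
    w = inner_solution(lam, a)
    dw_dlam = -a / (1.0 + lam) ** 2
    return 2.0 * (w - b) * dw_dlam

# Sanity check against a central finite difference.
lam, a, b, eps = 0.5, 2.0, 1.0, 1e-6
fd = (val_loss(lam + eps, a, b) - val_loss(lam - eps, a, b)) / (2 * eps)
print(abs(hypergradient(lam, a, b) - fd) < 1e-6)  # analytic and numeric agree
```

In realistic settings the inner solution has no closed form, so the hypergradient is approximated, e.g. by differentiating through unrolled inner optimization steps or via implicit differentiation; the abstract's hypernetworks serve a similar approximating role.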
Dr. Zhiqiang Tao is currently a tenure-track Assistant Professor in the School of Information at the Rochester Institute of Technology. He was an Assistant Professor in the Department of Computer Science at Santa Clara University from 2020 to 2022. He obtained his Ph.D. degree at Northeastern University in 2020 under the supervision of Prof. Yun Fu. He was previously a research intern with Adobe, CA, and Alibaba's DAMO Academy, WA. He has published over 50 papers in peer-reviewed journals and conferences, including IEEE TPAMI, TNNLS, TIP, TCYB, TKDD, NeurIPS, ICLR, KDD, SIGIR, CVPR, ICCV, ECCV, AAAI, IJCAI, ACM MM, CIKM, ICDM, and SDM. He serves as an Associate Editor of IEEE TCSVT and Neurocomputing, and has served as a reviewer and (senior) PC member for prestigious journals and international conferences. He won the third-place award in the KDD Cup AutoML track in 2019. His research interests lie in pursuing reliable and efficient AI by interweaving hyperparameter optimization, uncertainty quantification, and model compression.