Code for the paper "Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent"
- Authors: Pingzhi Li, Junyu Liu, Hanrui Wang, and Tianlong Chen
- Paper: arXiv:2405.00252
Optimization techniques in deep learning are predominantly led by first-order gradient methods such as SGD (Stochastic Gradient Descent). However, neural network training can greatly benefit from the rapid convergence of second-order optimization. Newton's GD (Gradient Descent) stands out in this category by rescaling the gradient with the inverse Hessian. Nevertheless, one of its major bottlenecks is matrix inversion, which is notably time-consuming: classically it scales as O(N^3) in the Hessian dimension N (the number of model parameters).
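The update behind Newton's GD is θ ← θ − H⁻¹∇L(θ), i.e. the gradient rescaled by the inverse Hessian. Below is a minimal NumPy sketch of one such step on a toy quadratic loss; the function and variable names are illustrative only, not this repository's API, and the damping term is there just to keep the toy system well conditioned.

```python
# A minimal sketch of one Newton's GD step on a toy quadratic loss.
# Names here are illustrative only, not this repository's API.
import numpy as np

def newton_step(params, grad, hessian, damping=1e-4):
    """Return params - H^{-1} g, computed via a linear solve rather than an
    explicit inverse; this solve is the O(N^3) bottleneck discussed above."""
    n = hessian.shape[0]
    # Small damping keeps the toy system well conditioned.
    direction = np.linalg.solve(hessian + damping * np.eye(n), grad)
    return params - direction

# Toy quadratic loss L(w) = 0.5 w^T A w - b^T w, whose Hessian is A.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
A = A @ A.T + 8.0 * np.eye(8)        # symmetric positive definite Hessian
b = rng.standard_normal(8)
w = np.zeros(8)
for _ in range(3):
    g = A @ w - b                    # gradient of the quadratic loss
    w = newton_step(w, g, A)
print(np.linalg.norm(A @ w - b))     # ~0: Newton's GD converges in a few steps
```

In practice the `np.linalg.solve` call is where the cubic cost lives, and that linear solve is the step targeted for acceleration here.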
Matrix inversion can be translated into solving a series of linear equations. Given that quantum linear solver algorithms (QLSAs), leveraging the principles of quantum superposition and entanglement, can solve such systems in time that scales only polylogarithmically with the system size (under suitable sparsity and conditioning assumptions), they offer a promising route to accelerating this bottleneck.
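The reduction mentioned above can be made concrete: recovering H⁻¹ explicitly means solving one linear system H xᵢ = eᵢ per column of the identity, while a single Newton step needs only the one system H d = g, which is exactly the kind of subproblem a QLSA addresses. A small NumPy sketch, assuming a dense, well-conditioned H:

```python
# NumPy sketch: explicit inversion equals one linear solve per identity
# column, while the Newton update direction needs only the single system
# H d = g -- the subproblem a quantum linear solver would target.
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((6, 6))
H = H @ H.T + 6.0 * np.eye(6)        # symmetric positive definite "Hessian"
g = rng.standard_normal(6)           # stand-in for the loss gradient

# Inversion as a series of linear equations: solve H x_i = e_i for each i.
columns = [np.linalg.solve(H, e_i) for e_i in np.eye(6)]
H_inv = np.column_stack(columns)
assert np.allclose(H_inv, np.linalg.inv(H))

# Newton's GD only ever needs one of these solves per step.
d = np.linalg.solve(H, g)
assert np.allclose(d, H_inv @ g)
```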
We propose a hybrid quantum-classical scheduling approach for accelerating neural network training with Newton's GD, in which the Hessian linear systems arising during training are scheduled onto either a quantum or a classical linear solver.
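To illustrate what scheduling between the two backends could look like, here is a hypothetical sketch, not the paper's actual scheduler: it compares assumed cost models for a dense classical solve (cubic in the dimension) and a QLSA-style solve (polylogarithmic in the dimension but growing with the condition number κ and the inverse precision), then dispatches each Hessian system to the cheaper backend. The cost models, thresholds, and function names are all assumptions made for illustration.

```python
# A hypothetical scheduling sketch -- NOT the paper's actual scheduler.
# It compares assumed cost models for a dense classical solve and a
# QLSA-style solve, then dispatches each Hessian system to the cheaper one.
import numpy as np

def classical_cost(n: int) -> float:
    # Assumed model: dense Gaussian elimination, cubic in the dimension.
    return float(n) ** 3

def quantum_cost(n: int, kappa: float, eps: float = 1e-2) -> float:
    # Assumed QLSA-style model: polylogarithmic in n, polynomial in the
    # condition number kappa and the inverse target precision eps.
    return kappa * np.log(n / eps) ** 2

def schedule_solver(H: np.ndarray) -> str:
    """Decide which backend should handle the linear system H d = g."""
    n = H.shape[0]
    kappa = float(np.linalg.cond(H))   # condition-number estimate
    return "quantum" if quantum_cost(n, kappa) < classical_cost(n) else "classical"

rng = np.random.default_rng(2)
M = rng.standard_normal((512, 512))
H = M @ M.T + 512.0 * np.eye(512)      # well-conditioned toy Hessian
print(schedule_solver(H))              # "quantum" under these assumed models
```

In a real pipeline the condition number would itself have to be estimated cheaply, since computing it exactly is as expensive as the solve it is meant to route.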
Our evaluation showcases the potential of this hybrid scheduling approach to substantially reduce the time spent on Hessian inversion, and with it the overall training time of Newton's GD.
@misc{li2024hybrid,
      title={Hybrid Quantum-Classical Scheduling for Accelerating Neural Network Training with Newton's Gradient Descent},
      author={Pingzhi Li and Junyu Liu and Hanrui Wang and Tianlong Chen},
      year={2024},
      eprint={2405.00252},
      archivePrefix={arXiv},
      primaryClass={quant-ph}
}