Invited Talks

A Rigorous and Robust Quantum Speed-Up in Supervised Machine Learning

  • Yunchao Liu, UC Berkeley
  • Time: 2021-02-26 11:00
  • Host: Dr. Xiao Yuan
  • Venue: Online Talk


Over the past few years, several quantum machine learning algorithms have been proposed that promise quantum speed-ups over their classical counterparts. Most of these learning algorithms either assume quantum access to data, making it unclear whether quantum speed-ups still exist without such strong assumptions, or are heuristic in nature, with no provable advantage over classical algorithms. In this paper, we establish a rigorous quantum speed-up for supervised classification using a general-purpose quantum learning algorithm that only requires classical access to data. Our quantum classifier is a conventional support vector machine that uses a fault-tolerant quantum computer to estimate a kernel function. Data samples are mapped to a quantum feature space, and the kernel entries can be estimated as the transition amplitude of a quantum circuit. We construct a family of datasets and show that no classical learner can classify the data inverse-polynomially better than random guessing, assuming the widely believed hardness of the discrete logarithm problem. Meanwhile, the quantum classifier achieves high accuracy and is robust against additive errors in the kernel entries that arise from finite sampling statistics.
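To illustrate the overall pipeline, here is a minimal classical sketch of a kernel-based classifier in the style described above. Everything in it is a stand-in: the feature map is a toy single-qubit-style encoding (not the discrete-log-based map from the paper), and a simple kernel nearest-mean rule replaces the support vector machine; on a quantum computer, each kernel entry would instead be estimated by sampling a circuit.

```python
import numpy as np

def feature_map(x):
    # Toy stand-in for a quantum feature map: encode a scalar x as the
    # single-qubit state |phi(x)> = cos(x)|0> + sin(x)|1>.
    # (Hypothetical choice for illustration only.)
    return np.array([np.cos(x), np.sin(x)])

def kernel(x, y):
    # Kernel entry = squared transition amplitude |<phi(x)|phi(y)>|^2.
    # In the quantum setting this is estimated from finitely many circuit
    # samples, which introduces the additive errors the paper accounts for.
    return np.abs(feature_map(x) @ feature_map(y)) ** 2

def classify(x, train_x, train_y):
    # Kernel nearest-mean rule (a simple substitute for the SVM): assign x
    # to the class whose training points have the larger average kernel
    # value with x.
    scores = {c: np.mean([kernel(x, t)
                          for t, label in zip(train_x, train_y) if label == c])
              for c in set(train_y)}
    return max(scores, key=scores.get)

train_x = [0.0, 0.1, 1.4, 1.5]   # two clusters of encoding angles
train_y = [0, 0, 1, 1]
print(classify(0.05, train_x, train_y))  # -> 0
print(classify(1.45, train_x, train_y))  # -> 1
```

Because the decision depends only on averaged kernel values, small additive noise in each entry, such as from finite sampling, shifts the class scores only slightly, which gives an informal sense of the robustness property claimed in the abstract.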


Yunchao Liu is a PhD student in the Theory Group at UC Berkeley, advised by Umesh Vazirani. He received his Bachelor's degree from the Yao Class at Tsinghua University, where he worked with Xiongfeng Ma. His research interests are in quantum information and computation. Recently, he has been working on establishing theoretical foundations for near-term quantum advantage, including algorithms, complexity, and benchmarking.


  • Admission


Online Zoom Meeting:
Meeting ID: 666 7873 3707
Password: 719808