Tight and Efficient Gradient Bounds for Parameterized Quantum Circuits
Quantum ( IF 5.1 ) Pub Date : 2024-09-25 , DOI: 10.22331/q-2024-09-25-1484
Alistair Letcher, Stefan Woerner, Christa Zoufal

The training of a parameterized model largely depends on the landscape of the underlying loss function. In particular, vanishing gradients are a central bottleneck in the scalability of variational quantum algorithms (VQAs), and are known to arise in various ways. However, a caveat of most existing gradient bound results is that they require t-design circuit assumptions that are typically not satisfied in practice. In this work, we lift these assumptions altogether and derive tight upper and lower bounds on loss and gradient concentration for a large class of parameterized quantum circuits and arbitrary observables, which are significantly stronger than prior work. Moreover, we show that these bounds, as well as the variance of the loss itself, can be estimated efficiently and classically, providing practical tools to study the loss landscapes of VQA models, including verifying whether or not a circuit/observable pair induces barren plateaus. In particular, our results can readily be leveraged to rule out barren plateaus for a realistic class of ansätze and mixed observables, namely, observables containing a non-vanishing local term. This insight has direct implications for hybrid Quantum Generative Adversarial Networks (qGANs). We prove that designing the discriminator appropriately leads to 1-local weights that stay constant in the number of qubits, regardless of discriminator depth. This implies that qGANs with appropriately chosen generators do not suffer from barren plateaus even at scale, making them a promising candidate for applications in generative quantum machine learning. We demonstrate this result by training a qGAN to learn a 2D mixture of Gaussian distributions with up to 16 qubits, and provide numerical evidence that global contributions to the gradient, while initially exponentially small, may kick in substantially over the course of training.
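The barren-plateau diagnostic the abstract refers to — checking whether the gradient variance over random initializations decays with qubit count — can be probed numerically. The following sketch is not the authors' estimator; it is a generic numpy-only simulation, assuming a hardware-efficient ansatz (RY layers with linear CZ entanglers, both chosen here for illustration) and the 1-local observable Z on qubit 0, with gradients obtained via the standard parameter-shift rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def ry(theta):
    """Single-qubit RY rotation matrix."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def apply_1q(state, gate, qubit, n):
    """Apply a 2x2 gate to the given qubit of an n-qubit statevector."""
    state = state.reshape([2] * n)
    state = np.moveaxis(state, qubit, 0)
    state = np.tensordot(gate, state, axes=([1], [0]))
    state = np.moveaxis(state, 0, qubit)
    return state.reshape(-1)

def apply_cz(state, q1, q2, n):
    """Apply a controlled-Z between qubits q1 and q2 (phase flip on |11>)."""
    view = state.reshape([2] * n)
    idx = [slice(None)] * n
    idx[q1], idx[q2] = 1, 1
    view[tuple(idx)] *= -1
    return view.reshape(-1)

def loss(params, n, layers):
    """Expectation of Z on qubit 0 after a hardware-efficient ansatz."""
    state = np.zeros(2 ** n, dtype=float)
    state[0] = 1.0
    p = params.reshape(layers, n)
    for l in range(layers):
        for q in range(n):
            state = apply_1q(state, ry(p[l, q]), q, n)
        for q in range(n - 1):
            state = apply_cz(state, q, q + 1, n)
    probs = np.abs(state.reshape(2, -1)) ** 2  # split on qubit 0
    return probs[0].sum() - probs[1].sum()

def grad_param_shift(params, i, n, layers):
    """Exact gradient of the loss w.r.t. parameter i via parameter shift."""
    shift = np.zeros_like(params)
    shift[i] = np.pi / 2
    return 0.5 * (loss(params + shift, n, layers) - loss(params - shift, n, layers))

# Estimate Var[dL/dtheta_0] over uniformly random initializations.
n, layers, samples = 4, 3, 200
grads = [grad_param_shift(rng.uniform(0, 2 * np.pi, n * layers), 0, n, layers)
         for _ in range(samples)]
print(f"Var[dL/dtheta_0] over random init: {np.var(grads):.4f}")
```

Repeating the estimate for increasing n and watching whether the variance shrinks exponentially is the brute-force version of what the paper's classical bounds let one verify without sampling gradients at all.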

Updated: 2024-09-25