Albarrán-Arriagada, F and Retamal, J C and Solano, E and Lamata, L (2020) Reinforcement learning for semi-autonomous approximate quantum eigensolver. Machine Learning: Science and Technology, 1 (1). 015002. ISSN 2632-2153
Albarrán-Arriagada_2020_Mach._Learn.__Sci._Technol._1_015002.pdf - Published Version (1MB)
Abstract
The characterization of an operator by its eigenvectors and eigenvalues allows us to know its action on any quantum state. Here, we propose a protocol to obtain an approximation of the eigenvectors of an arbitrary Hermitian quantum operator. The protocol is based on measurement and feedback processes, which characterize a reinforcement learning protocol. Our proposal comprises two systems: a black box, named the environment, and a quantum state, named the agent. The role of the environment is to change any quantum state by a unitary matrix $\hat{U}_E = \mathrm{e}^{-\mathrm{i}\tau\hat{\mathcal{O}}_E}$, where $\hat{\mathcal{O}}_E$ is a Hermitian operator and $\tau$ is a real parameter. The agent is a quantum state which adapts to some eigenvector of $\hat{\mathcal{O}}_E$ through repeated interactions with the environment, feedback processes, and semi-random rotations. With this proposal, we can obtain an approximation of the eigenvectors of a random single-qubit operator with average fidelity over 90% in fewer than 10 iterations, surpassing 98% in fewer than 300 iterations. Moreover, in the two-qubit case, the four eigenvectors are obtained with fidelities above 89% in 8000 iterations for a random operator, and with fidelities of 99% for an operator with the Bell states as eigenvectors. This protocol can be useful for implementing semi-autonomous quantum devices, which should be capable of extracting information and making decisions with minimal resources and without human intervention.
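To make the measurement-and-feedback idea concrete, the following Python/NumPy sketch simulates a simplified single-qubit version of such a protocol. It is an illustrative assumption, not the authors' exact update rule: the function names (`random_hermitian`, `environment_unitary`, `measure_in_agent_basis`, `run_protocol`) and the exploration-range schedule are invented for this example. Only the ingredients named in the abstract come from the source: a hidden Hermitian operator acting through $\hat{U}_E = \mathrm{e}^{-\mathrm{i}\tau\hat{\mathcal{O}}_E}$, projective measurement outcomes used as reward or punishment, and semi-random corrective rotations of the agent state.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(dim: int) -> np.ndarray:
    """Random Hermitian matrix playing the role of the hidden operator O_E."""
    a = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
    return (a + a.conj().T) / 2

def environment_unitary(o_e: np.ndarray, tau: float) -> np.ndarray:
    """U_E = exp(-i * tau * O_E): the only way the black box acts on states."""
    vals, vecs = np.linalg.eigh(o_e)
    return vecs @ np.diag(np.exp(-1j * tau * vals)) @ vecs.conj().T

def measure_in_agent_basis(state: np.ndarray, agent: np.ndarray) -> int:
    """Projective measurement in the {agent, agent_perp} basis; 0 = 'unchanged'."""
    p_same = abs(np.vdot(agent, state)) ** 2
    return 0 if rng.random() < p_same else 1

def run_protocol(o_e: np.ndarray, tau: float = 1.0, iterations: int = 300) -> np.ndarray:
    """Adapt the agent state toward an eigenvector of O_E (simplified schedule)."""
    u_e = environment_unitary(o_e, tau)
    agent = np.array([1.0, 0.0], dtype=complex)  # initial agent state |0>
    exploration = np.pi                          # range of the semi-random rotations
    for _ in range(iterations):
        outcome = measure_in_agent_basis(u_e @ agent, agent)
        if outcome == 0:
            # Reward: the state was (probably) left unchanged, so explore less.
            exploration *= 0.95
        else:
            # Punishment: widen exploration again and apply a semi-random rotation.
            exploration = min(np.pi, exploration / 0.95)
            theta = exploration * (rng.random() - 0.5)
            phi = 2 * np.pi * rng.random()
            rot = np.array([[np.cos(theta), -np.exp(-1j * phi) * np.sin(theta)],
                            [np.exp(1j * phi) * np.sin(theta), np.cos(theta)]])
            agent = rot @ agent
    return agent

# Usage: compare the converged agent against the true eigenvectors (known only
# to the simulation, not to the agent).
o_e = random_hermitian(2)
agent = run_protocol(o_e)
_, eigvecs = np.linalg.eigh(o_e)
fidelities = [abs(np.vdot(eigvecs[:, k], agent)) ** 2 for k in range(2)]
print("best fidelity with an eigenvector of O_E:", max(fidelities))
```

The design choice in this sketch mirrors the reward/punishment intuition of the abstract: an eigenvector of $\hat{\mathcal{O}}_E$ is a fixed point of $\hat{U}_E$ up to a global phase, so the "unchanged" outcome becomes certain there, the exploration range keeps shrinking, and the agent locks in; away from an eigenvector, punishments keep the semi-random rotations large enough to continue the search.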
| Item Type: | Article |
|---|---|
| Subjects: | STM Library Press > Multidisciplinary |
| Depositing User: | Unnamed user with email support@stmlibrarypress.com |
| Date Deposited: | 30 Jun 2023 05:13 |
| Last Modified: | 12 Sep 2024 05:53 |
| URI: | http://journal.scienceopenlibraries.com/id/eprint/1685 |