Abstract
We consider the problem of deciding whether a given state preparation, i.e., a source of quantum states, is accurate; namely, whether it produces states close to a target one within a prescribed threshold. While most of the results in the literature consider the case in which the measurement operators can be chosen arbitrarily depending on the target state, obtaining favorable (Heisenberg) scaling, we focus on the case in which the measurements can only be chosen from a given set. We show that, in this case, the order of measurements is critical for quickly assessing accuracy. We propose and compare different strategies to compute optimal or suboptimal measurement sequences, either relying solely on a priori information, i.e., the target state for state preparation, or actively adapting the sequence to the previously obtained measurements. Numerical simulations show that the proposed algorithms significantly reduce the number of measurements needed for verification and indicate an advantage for the adaptive protocol, especially when assessing faulty preparations.
1 Introduction
Due to unavoidable errors, noise, or decoherence, realistic quantum devices do not always behave as expected. Various metrics can be used to characterize and benchmark a quantum device [1]. In this work, we focus on devices expected to reliably produce some target state. Given an unknown quantum state in a d-dimensional Hilbert space \({\mathcal {H}}_d\), \(d^2-1\) measurements are necessary in general for a full tomographic reconstruction of the corresponding density matrix [2].
However, in many situations, such as quantum telecommunication, quantum state preparation and quantum computation, we are more concerned with whether some experimentally accessible quantum state \(\rho _{\textrm{exp}}\) is accurate enough with respect to a target state \(\rho _0\), representing the intended result of the preparation, processing or communication task, rather than with fully reconstructing it. This problem is referred to as quantum state certification [3] and has been considered in the literature from multiple viewpoints.
Most of the research on state verification builds upon a hypothesis-testing framework [3, 4]. The main results show that the hypothesis "the unknown state is the target state" can be tested using strategies that achieve Heisenberg scaling between the precision and the sample complexity. A key hypothesis needed to obtain this extremely favorable scaling is that the measurement operators used in the test leave the target state invariant, which directly translates into having false negatives with zero probability, so that errors come only from false positives. Moreover, if the measurements can be chosen depending on the state to be verified, these verification protocols require only two measurement settings for assessing an arbitrary bipartite pure state (see, e.g., [5]). The basic strategy can then be extended to include locality constraints, specific classes of target states, adversarial choices in the states to be tested, classical communication and more [4, 6,7,8,9,10,11,12].
In this work, we reconsider this task in a different, more taxing scenario: We assume that only a finite set of measurements is available and given, independently of the target state. Under this assumption, the previously proposed optimal verification strategies cannot be applied in general, as it is possible that no measurement in the set leaves the target invariant. For this reason, we are going to need more measurements, and a worse scaling between accuracy and sample complexity, than in the unconstrained scenario.
Of course, one way to tackle the problem would be to proceed with a full tomography of \(\rho \) [2] and then decide accuracy accordingly. This is, however, in general not efficient, as it requires obtaining averages for at least \(d^2-1\) independent observables, and it does not leverage prior information about the target state \(\rho _0\). For example, if the target state is known to be pure, a smaller number of measurements is required via compressed sensing techniques [13].
In order to improve the verification performance, we here propose procedures that decide whether the state \(\rho \) is accurate within a prescribed tolerance without necessarily obtaining a full tomography, thus reducing the number of required observables. The central idea is to order the measurement sequence using the a priori information, so that the first measurements are the most informative when the state to be measured is indeed \(\rho _0.\) The procedure can also be seen as a way to optimize the order of the measured observables in a tomography depending on the best available estimate of the state at hand, in the spirit of [14].
The procedures we propose are of two types. The first type computes the whole measurement sequence off-line and then uses it to choose which measurements to actually perform, stopping as soon as verification can be decided. A crucial aspect, in practice, is the computation of the optimal sequence. The latter is a nontrivial optimization problem that has to be solved in a number of instances that scales combinatorially with the number of measurements, which in turn grows at least quadratically in the dimension of the space. For this reason, even off-line calculation of optimal sequences rapidly becomes impractical. In order to address this problem, we propose iterative algorithms, which determine the best next measurement given the previously chosen ones. Two versions are provided, where the second one relies on a relaxation of the constraints that allows for an analytic treatment. These ways of constructing the sequence, albeit suboptimal, are computationally tractable and offer another advantage: They lend themselves to be used as adaptive strategies, which rely on the previously obtained actual measurements rather than just the target state. In fact, the second type of verification method we propose is an adaptive strategy, where the next measurement is chosen based on the best available estimate given the actual measurements performed to that point. The different methods are tested with a paradigmatic example: a two-qubit state where only local Pauli measurements are available. The results highlight the flexibility of the adaptive method, which performs well even in the case of inaccurate priors.
2 Problem setting and verification criteria
We denote by \({\mathcal {B}}({\mathcal {H}})\) the set of all linear operators on a finite-dimensional Hilbert space \({\mathcal {H}}\). Define \({\mathcal {B}}_{*}({\mathcal {H}}){:}{=}\{X\in {\mathcal {B}} ({\mathcal {H}})|X=X^\dag \}\) and \({\mathcal {B}}_{>0}({\mathcal {H}}){:}{=}\{X\in {\mathcal {B}}({\mathcal {H}})|X> 0\}\). \(\text {Tr}(A)\) indicates the trace of \(A\in {\mathcal {B}}({\mathcal {H}})\). We define \({\mathcal {S}}({\mathcal {H}}_d){:}{=}\{\rho \in {\mathcal {B}}_{*}({\mathcal {H}}_d)|\,\rho \ge 0, \text {Tr}(\rho )=1\}\) as the set of all physical density matrices on \({\mathcal {H}}_d\).
In order to precisely specify the verification task, we introduce the following definition, which depends on the choice of a relevant distance-like function.
Definition 1
(\((\epsilon ,D,\rho _0)\)-accurate) Given a target state \(\rho _0\in {\mathcal {S}}({\mathcal {H}}_d)\) and a (pseudo-)distance function D on \({\mathcal {S}}({\mathcal {H}}_d)\), the density matrix \(\rho \in {\mathcal {S}}({\mathcal {H}}_d)\) is called \((\epsilon ,D,\rho _0)\)-accurate if \(D(\rho ,\rho _0)\le \epsilon \) with \(\epsilon \ge 0\).
Consider a set of observables, represented by Hermitian matrices \(\{A_i\}_{i=1}^R\), where R is a positive integer. This set of observables is called information-complete if \(\{A_i\}_{i=1}^R\) generates the set of all d-dimensional traceless Hermitian matrices. Note that a necessary condition for the observables to be information-complete is \(R\ge d^2\). If \(\{A_i\}_{i=1}^R\) is information-complete and the measurement statistics \(\{\hat{y}_i\}_{i=1}^R\) are known exactly, i.e., \(\hat{y}_i=y_i{:}{=}\text {Tr}(\rho _{\textrm{exp}}A_i)\) with \(i\in \{1,\dots ,R\}\), then there is a unique state compatible with the constraints, that is, the generated state \(\rho _{\textrm{exp}}\). Throughout this paper, we suppose that the set of observables is finite, information-complete and fixed. The problem we will be concerned with is the following.
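As a concrete check (a numerical sketch of ours, not part of the original text), informational completeness can be tested by verifying that the vectorized observables span the full \(d^2\)-dimensional operator space; here we use the \(R=16\) two-qubit local Pauli products, the observable set of the example discussed later.

```python
import numpy as np
from itertools import product

# Sketch: check informational completeness of the two-qubit local Pauli
# products by verifying that their vectorizations span the full
# d^2-dimensional operator space.
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

observables = [np.kron(a, b) for a, b in product([I2, X, Y, Z], repeat=2)]
d = 4                                               # two qubits
M = np.stack([A.reshape(-1) for A in observables])  # R x d^2 matrix
rank = int(np.linalg.matrix_rank(M))                # = 16 here
```

A rank equal to \(d^2\) means the set generates all Hermitian matrices (in particular all traceless ones); a rank below \(d^2\) (e.g., after removing every product containing \(\sigma _y\)) would flag an information-incomplete set.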
Problem 1
Based on the a priori state \(\rho _0\) and the available data \(\{\hat{y_i}\}^{K}_{i=1}\) with \(K\le R\), determine the optimal order of the \(A_i\) to verify whether the generated state \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate via as few measurements as possible.
In order to introduce the central idea of the work, let us assume for now that a certain sequence of the available observables has been decided. There are two cases in which the verification process can be terminated, establishing whether the generated state is \((\epsilon ,D,\rho _0)\)-accurate or not with a minimum of measurements. Suppose that the measurements are perfect, namely the available data \(y_i\) satisfy \(y_i=\text {Tr}(A_i \rho _{\textrm{exp}})\). Denote by \(\bar{{\textbf{S}}}_i{:}{=}\{\rho \in {\mathcal {S}}({\mathcal {H}}_d)|\,\text {Tr}(\rho A_i)=y_i\}\) the set of states compatible with the measurement data \(y_i\). Based on \(\{y_i\}^{K}_{i=1}\), two criteria can be used to verify if the generated state \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate in each step:
C1. If \(\textrm{min}_{\rho \in \bigcap ^K_{i=1}\bar{{\textbf{S}}}_i} D(\rho ,\rho _0)>\epsilon \), \(\rho _{\textrm{exp}}\) is not \((\epsilon ,D,\rho _0)\)-accurate;
C2. If \(\max _{\rho \in \bigcap ^K_{i=1} \bar{{\textbf{S}}}_i} D(\rho ,\rho _0)\le \epsilon \), \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate.
Depictions of the situations corresponding to the above two criteria C1 and C2 are shown in Figure 1. C1 guarantees that all states compatible with the measurement data are outside of the ball of radius \(\epsilon \) around the target state \(\rho _0\), while C2 ensures that the same states are all inside.
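To make C1 and C2 concrete, consider a single qubit with \(D=d_{HS}\), the Hilbert–Schmidt distance: Writing states as Bloch vectors, \(d_{HS}\) is the Euclidean distance between Bloch vectors divided by \(\sqrt{2}\), and a perfect Pauli measurement fixes one Bloch component, so the compatible set is a disk in the Bloch ball. The following sketch (our illustration, not the paper's code) evaluates both criteria in closed form:

```python
import numpy as np

# Closed-form C1/C2 check for one qubit: rho = (I + r.sigma)/2, and
# d_HS(rho, sigma) = |r_rho - r_sigma| / sqrt(2). A perfect measurement of
# the Pauli observable along `axis` (0=X, 1=Y, 2=Z) with average y fixes
# that Bloch component, so the compatible set is the disk
# {r : r[axis] = y, |r| <= 1}.
def distance_range_on_disk(r0, axis, y):
    """(min, max) HS distance from the target Bloch vector r0 to the disk."""
    r0 = np.asarray(r0, dtype=float)
    others = [i for i in range(3) if i != axis]
    s = float(np.linalg.norm(r0[others]))  # in-plane offset of r0 from disk center
    R = np.sqrt(max(0.0, 1.0 - y ** 2))    # disk radius
    dz = y - r0[axis]                      # offset along the measured axis
    dmin = np.sqrt(dz ** 2 + max(0.0, s - R) ** 2)
    dmax = np.sqrt(dz ** 2 + (R + s) ** 2)
    return dmin / np.sqrt(2), dmax / np.sqrt(2)   # (omega, Omega)

def verify_one_measurement(r0, axis, y, eps):
    """'not accurate' (C1), 'accurate' (C2), or 'undecided'."""
    omega, Omega = distance_range_on_disk(r0, axis, y)
    if omega > eps:
        return "not accurate"   # C1: all compatible states farther than eps
    if Omega <= eps:
        return "accurate"       # C2: all compatible states within eps
    return "undecided"
```

For target \(|0\rangle \) (Bloch vector (0, 0, 1)), a Z outcome of 1 certifies accuracy for any \(\epsilon \ge 0\), a Z outcome of 0 refutes it for any \(\epsilon <1/\sqrt{2}\), while an X outcome of 0 is compatible with both and leaves the verification undecided.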
In the following sections, we shall leverage the criteria above in order to devise optimal measurement sequences, or suboptimal ones that present computational advantages and can be adapted to the actual measurement outcomes.
3 Verification of quantum state based on the a priori state
In this section, we first introduce a strategy for determining the measurement sequence \(\textsf{M}\) off-line, based only on the a priori target state \(\rho _0\), i.e., without using the measurement data. We next use the sequence \(\textsf{M}\) to verify whether the generated state \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate according to the criteria C1 and C2. The objective is to perform as few measurements as possible to achieve verification.
3.1 Off-line construction of the optimal measurement sequence
From an experimental point of view, it is arguably easier to determine the whole sequence of measurements before performing them. We shall start by exploring this approach, while the adaptive approach, in which the next measurement is chosen depending on the outcome of the previous ones, will be treated in Section 4.
Denote by \(\textbf{S}_i(\rho _0){:}{=}\{\rho \in {\mathcal {S}}({\mathcal {H}}_d)|\, \text {Tr}(\rho A_i)=\text {Tr}(\rho _0 A_i)\}\) the set of density matrices compatible with the measurement of \(A_i\) that we would have if the state were actually \(\rho _0\in {\mathcal {S}}({\mathcal {H}}_d)\). While relying only on prior information, with no true measurement data available, we use \(\textbf{S}_i(\rho _0)\) to replace the constraints \(\bar{\textbf{S}}_i\) in the criteria C1 and C2. Note that \(\textbf{S}_i(\rho _0)=\bar{\textbf{S}}_i\) if the state is perfectly generated, i.e., \(\text {Tr}(\rho _0A_i)=y_i\).
Obviously, since \(\rho _0\in \textbf{S}_i(\rho _0)\) for all \(i\in \{1,\dots ,R\}\) by construction, C1 can never be satisfied in this scenario. Thus, we only exploit C2 to determine the order of measurements. Suppose that the distance function D is continuous on \({\mathcal {S}}({\mathcal {H}}_d)\), e.g., any matrix norm, the quantum relative entropy, etc. (see [15, Chapters 9, 11] for standard options); then, due to the compactness of \(\bigcap _i\textbf{S}_i(\rho _0)\), the maximum \(\max _{\rho \in \bigcap _i\textbf{S}_i(\rho _0)} D(\rho ,\rho _0)\) exists.
If the state were actually \(\rho _0\), the minimal number of measurements that allows one to determine that the preparation was indeed accurate would correspond, according to C2, to the minimum n for which there exists a set of measurement indexes \(\textsf{M}_n\subset \{1,\dots ,R\}\) such that
\(\max _{\rho \in \bigcap _{i\in \textsf{M}_n}\textbf{S}_i(\rho _0)} D(\rho ,\rho _0)\le \epsilon ,\)
and the optimal sequence would be any permutation of \(\textsf{M}_n\).
Algorithm OS below can be used to generate one such optimal sequence.
Algorithm OS: Optimal verification Sequence based on \(\rho _0\) | |
|---|---|
\(\bullet \) | Initialization: Define the set \(\textsf{S}=\{1,\dots ,R\}\) and set \(k=1\). |
\(\bullet \) | Step 1: Denote a k-element sequence by \(\textsf{M}_k{:}{=}(m_{k,1},\dots ,m_{k,k})\in \textsf{S}^k\). Compute |
\(\overline{\textsf{M}}_k\in \mathrm{arg\,min}_{\textsf{M}_k\in \textsf{S}^k } \Big ( \max _{\rho \in \bigcap _{i\in \textsf{M}_k}\textbf{S}_i(\rho _0)}D(\rho ,\rho _0)\Big ).\) | |
If \(\max _{\rho \in \bigcap _{i\in \overline{\textsf{M}}_k}\textbf{S}_i(\rho _0)}D(\rho ,\rho _0)\le \epsilon \) stop the procedure. Otherwise, update \(k=k+1\). | |
\(\bullet \) | Step 2: Repeat Step 1 until \(k=d^2\). |
Note that each step of the above algorithm is independent; thus, for some \(i<j\), in general \(\overline{\textsf{M}}_i\not \subset \overline{\textsf{M}}_j\). At the end of the process, we obtain a sequence of measurements \(\overline{\textsf{M}}_n\) containing \(n\le R\) elements, whose corresponding observables are the optimal choice for the verification of the \((\epsilon ,D,\rho _0)\)-accuracy of \(\rho _\textrm{exp}=\rho _0\). The order of the elements belonging to \(\overline{\textsf{M}}_n\) is not important. However, the computational complexity of the above algorithm is too large: In order to determine \({\overline{\textsf{M}}}_n\), it needs to solve \(\sum ^n_{k=1}\binom{n}{k}=2^n-1\) optimization problems. Moreover, in practice, the generated state \(\rho _{\textrm{exp}}\) usually differs from the target state \(\rho _0\), and thus the measurement sequence \({\overline{\textsf{M}}}_n\) generated by Algorithm OS with respect to \(\epsilon \) may not be able to verify the accuracy of \(\rho _{\textrm{exp}}\). To obtain a tomographically complete sequence, one needs to add \(d^2-n\) linearly independent measurement operators from the available set.
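The count above can be checked directly (stdlib only; this computes the number of optimization instances, not their solutions):

```python
from math import comb

# Algorithm OS solves, at step k, one inner max-distance problem for every
# k-element subset of the n relevant observables; summed over k, the total
# number of optimization instances is sum_k C(n, k) = 2^n - 1.
def os_optimization_count(n: int) -> int:
    return sum(comb(n, k) for k in range(1, n + 1))

counts = [os_optimization_count(n) for n in (4, 8, 15)]
# n = 15 (a two-qubit tomographic basis, minus identity) already requires
# tens of thousands of instances.
```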
3.2 Iterative construction of verification sequences
In order to address the above issues, we propose to construct the sequence of measurements iteratively, based on the previously determined measurement indexes, which greatly reduces the computational complexity and allows extending the procedure to the full observable set. The resulting sequence will in general be suboptimal with respect to \(\rho _0\), but it still yields an advantage over a random sequence of observables, as shown in Section 5.
3.2.1 Optimization-based approach
The general algorithm we propose works as follows: It starts by evaluating, for each measurement \(A_i\), the maximal distance \(\alpha _i\) from \(\rho _0\) of the states \(\rho \) belonging to \(\textbf{S}_i(\rho _0)\), the set of states that are compatible with the measurement outcome \(\textrm{Tr}(\rho A_i)=\textrm{Tr}(\rho _0 A_i)\). The measurement giving the minimum value of \(\alpha _i\) is selected as the first measurement \(A_{m_1}\), and the corresponding maximum distance is \(\alpha ^1_{m_1}\). Next, the measurement \(A_{m_{i+1}}\) is chosen so that it is linearly independent of the previously chosen ones and, at the same time, minimizes the maximum distance from \(\rho _0\) of the set compatible with the measurements of all the previously selected \(A_{m_1},\ldots ,A_{m_i}\) together with the new one. The minimum worst-case distance among compatible states, \(\alpha ^n_i\), with n indicating the iteration and i the selected measurement, serves as an indicator of how likely it is that checking C2 will allow us to determine whether the actual state is \((\epsilon ,D,\rho _0)\)-accurate.
A more formal version of the above procedure is summarized as Algorithm IOS.
Algorithm IOS: Iteratively Optimized Sequence based on \(\rho _0\) | |
|---|---|
\(\bullet \) | Initialization: Define the sets \(\textsf{M}=\emptyset \) and \(\textsf{S}=\{1,\dots ,R\}.\) Set \(k=0\). |
\(\bullet \) | Step 1: Define \(\bar{\textsf{S}}\) as the set of all \(i\in \textsf{S}\) such that \(A_i\notin \text {span} \{A_n\}_{n\in \textsf{M}}\). Set \(k=k+1\). For all \(i\in \bar{\textsf{S}}\), compute |
\(\alpha ^k_i{:}{=}\max _{\rho \in \bigcap _{n\in \textsf{M}}\textbf{S}_n(\rho _0)\cap \textbf{S}_i(\rho _0) } D(\rho ,\rho _0).\) | |
If \(\textrm{min}_{i\in \bar{\textsf{S}}}\alpha ^k_i=0\), set \(\textsf{M}=\textsf{M}\cup \bar{\textsf{S}}\) and stop the process: In this case, \(\rho _0\) must belong to the span of the selected measurements. | |
Otherwise, compute \(\textrm{arg min}_{i\in \bar{\textsf{S}}}\alpha ^k_i\). If \(\mathrm{arg~min}\) outputs a single integer, set \(m_k=\mathrm{arg\,min}_{i\in \bar{\textsf{S}}}\alpha ^k_i\). If \(\mathrm{arg~min}\) outputs multiple integers, designate a unique \(m_k\) in that set, according to some deterministic rule or at random: In this case, the criteria we consider do not lead to a preferred choice. | |
Update \(\textsf{M}=\textsf{M}\cup \{m_k\}\), \(\textsf{S}=\textsf{S}\setminus \{m_k\}\). | |
\(\bullet \) | Step 2: Repeat Step 1 until \(\text {card}(\textsf{M})=d^2\) |
At the end of the procedure, \(\textsf{M}\) is an ordered sequence of measurements, from the most to the least informative based on the a priori state. Note that, at the end of Step 2, we obtain a sequence of measurements containing n linearly independent observables, from which the target state \(\rho _0\) can be reconstructed via tomography. By construction, \(\alpha ^{1}_{m_1}\ge \alpha ^{2}_{m_2}\ge \cdots \ge \alpha ^{n}_{m_{n}}\) is a decreasing sequence of maximum distances from \(\rho _0\) of the states compatible with the measurements. However, in practice \(\rho _{\textrm{exp}}\ne \rho _0\), so in the case \(n<d^2\) the n observables may not be sufficient to verify the accuracy of \(\rho _{\textrm{exp}}\). Thus, we need to complete the sequence with \(d^2-n\) additional linearly independent observables, which we can choose at random or according to other criteria.
3.2.2 Analytic approach based on distance bound
The computational complexity of Algorithm IOS still depends heavily on the number of optimization problems to be solved, which, albeit reduced with respect to the optimal a priori sequence, still increases quadratically with the dimension of the Hilbert space. To address this issue, we provide an approximation of Algorithm IOS for the case in which the Hilbert–Schmidt distance is chosen as the distance function. In this case, we do not order the measurements by evaluating the exact maximal distance of the set of states compatible with the measurements (i.e., the \(\alpha ^k_i\) values), but instead by evaluating an upper bound on such distance that can be expressed analytically.
The Hilbert–Schmidt distance is defined as
\(d_{HS}(\rho ,\sigma ){:}{=}\Vert \rho -\sigma \Vert _{HS}=\sqrt{\textrm{Tr}[(\rho -\sigma )^2]}.\)
In the following proposition, we provide an upper bound on the distance from the target state \(\rho _0\) for states \(\sigma \) that are compatible with \(\rho _0\) according to a set of observables \(\{A_{i}\}^K_{i=1}\), where \(K\le R\).
Proposition 1
Given a state \(\rho _0\in {\mathcal {S}}({\mathcal {H}}_d)\) and a set of observables \(A_{i}\in {\mathcal {B}}_{*}({\mathcal {H}}_d)\) with \(i\in \{1,\dots ,K\}\), for any \(\sigma \in \bigcap ^K_{i=1}\textbf{S}_{i}(\rho _0)\),
\(d_{HS}(\sigma ,\rho _0)\le \sqrt{\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)}+\sqrt{1-\textrm{Tr}(\varrho _K^2)},\qquad (1)\)
where \(\varrho _K\) is the projection of \(\rho _0\) onto the subspace spanned by the operators \(\{A_i\}^K_{i=1}\).
Proof
The square of the Hilbert–Schmidt distance can be written as \(d^2_{HS}(\sigma ,\rho _0)=\textrm{Tr}(\rho _0^2)+\textrm{Tr}(\sigma ^2) -2\textrm{Tr}(\sigma \rho _0) \le 1+\textrm{Tr}(\rho _0^2) -2\textrm{Tr}(\sigma \rho _0)\). Any state \(\sigma \in \bigcap _i\textbf{S}_i(\rho _0)\) satisfies \(\textrm{Tr}(\sigma A_{i})=\textrm{Tr}(\rho _0 A_{i})\) for all \(i\in \{1,\dots ,K\}\). Therefore, the orthogonal projections of \(\rho _0\) and \(\sigma \) on the space spanned by the operators \(\{A_i\}^K_{i=1}\) coincide: We define this common projection as \(\varrho _K\). We can thus write \(\rho _0=\varrho _K+\varrho _K^\perp \) and \(\sigma =\varrho _K+\varsigma _K^\perp \), with \(\varrho _K^\perp \) and \(\varsigma _K^\perp \) orthogonal to \(\varrho _K\) according to the Hilbert–Schmidt inner product \(\langle \rho ,\sigma \rangle _{HS}=\textrm{Tr}(\rho ^*\sigma )\). Therefore, \(\textrm{Tr}(\rho _0\sigma )=\textrm{Tr}(\varrho _K^2)+\textrm{Tr}(\varrho _K^\perp \varsigma _K^\perp )\). Moreover, the equations \(\textrm{Tr}(\rho _0^2)=\textrm{Tr}(\varrho _K^2)+\textrm{Tr}[(\varrho _K^\perp )^2]\) and \(\textrm{Tr}(\sigma ^2)=\textrm{Tr}(\varrho _K^2)+\textrm{Tr}[(\varsigma _K^\perp )^2]\le 1\) imply that \(\textrm{Tr}[(\varrho _K^\perp )^2]=\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)\) and \(\textrm{Tr}[(\varsigma _K^\perp )^2]\le 1-\textrm{Tr}(\varrho _K^2)\). From the Cauchy–Schwarz inequality, we have that \(\textrm{Tr}(\varrho _K^\perp \varsigma _K^\perp )\ge - \sqrt{\textrm{Tr} [(\varrho _K^\perp )^2]\textrm{Tr}[(\varsigma _K^\perp )^2]} \ge -\sqrt{\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)}\sqrt{1-\textrm{Tr}(\varrho _K^2)}\). We have therefore proved that
\(d^2_{HS}(\sigma ,\rho _0)\le 1+\textrm{Tr}(\rho _0^2)-2\textrm{Tr}(\varrho _K^2) +2\sqrt{\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)}\sqrt{1-\textrm{Tr}(\varrho _K^2)} =\Big (\sqrt{\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)}+\sqrt{1-\textrm{Tr}(\varrho _K^2)}\Big )^2,\)
and the main proposition follows. \(\square \)
Remark 1
We would like to point out that if the target state \(\rho _0\) is pure (i.e., \(\textrm{Tr}(\rho _0^2)=1\)), the upper bound given in (1) simplifies to
\(d_{HS}(\sigma ,\rho _0)\le 2\sqrt{1-\textrm{Tr}(\varrho _K^2)}.\)
Moreover, a similar bound also holds when the Bures metric is employed and the target state \(\rho _0\) is pure. Indeed, when \(\rho _0\) is pure, the Bures distance reads \(d_B(\rho _0,\sigma )=\sqrt{2\big (1-\sqrt{\textrm{Tr}(\rho _0\sigma )}\big )}\). Therefore, by following similar steps, it is possible to demonstrate that, for pure \(\rho _0\) such that \(\textrm{Tr}(\varrho _K^2)\ge 1/2\), we have
\(d_B(\rho _0,\sigma )\le \sqrt{2\big (1-\sqrt{2\textrm{Tr}(\varrho _K^2)-1}\big )}.\)
Lastly, the bound (1) can be interpreted geometrically. The states \(\sigma \) can be written as \(\sigma =\varrho _K+\varsigma _K^\perp \) with fixed \(\varrho _K\). Therefore, the states \(\sigma \) are contained within a ball centered at \(\varrho _K\) with radius \(R_K=\max \sqrt{\textrm{Tr}[(\varsigma _K^\perp )^2]}= \sqrt{1-\textrm{Tr}(\varrho _K^2)}\). The state \(\rho _0=\varrho _K+\varrho _K^\perp \) also belongs to this ball, but its distance from the center is given by \(d_K=d_{HS}(\rho _0,\varrho _K)=\sqrt{\textrm{Tr}[(\varrho _K^\perp )^2]}=\sqrt{\textrm{Tr}(\rho _0^2)-\textrm{Tr}(\varrho _K^2)}\). Therefore, the maximum distance between \(\rho _0\) and \(\sigma \) is indeed bounded by \(R_K+d_K\), as in (1). Notice that, starting from a set of linearly independent observables \(\{A_i\}\), adding an extra observable \(A_j\) cannot worsen the bound.
Proposition 2
Assume we have fixed the first K observables \(\{A_i\}_{i=1}^{K}\) and we add a further measurement operator \(A_{K+1}\). Let \(\{\Gamma _i\}\) be an orthonormal basis of the space spanned by \(\{A_i\}_{i=1}^{K}\). Define \(A^\perp _{K+1}=A_{K+1}-\sum _i\textrm{Tr}(\Gamma _iA_{K+1})\Gamma _i.\) Then, the projected state becomes:
\(\varrho _{K+1}=\varrho _{K}+\frac{\textrm{Tr}(\rho _0 A^\perp _{K+1})}{\Vert A^\perp _{K+1}\Vert ^2_{HS}}A^\perp _{K+1}.\)
The latter also implies \(\Vert \varrho _{K+1}\Vert ^2_{HS}=\Vert \varrho _{K}\Vert ^2_{HS}+\frac{\textrm{Tr}^2(\rho _0 A^\perp _{K+1})}{\Vert A^\perp _{K+1}\Vert ^2_{HS}}\).
Proof
We can write \(\rho _0=\varrho _K+\alpha A^\perp _{K+1}+\tau ^\perp _{K+1}\), with \(A^\perp _{K+1}\) orthogonal to all the \(A_i\)'s and \(\tau ^\perp _{K+1}\) orthogonal to both \(\varrho _K\) and \(A^\perp _{K+1}\). Since \(\textrm{Tr}(\rho _0 A_{K+1})=\textrm{Tr}(\varrho _K A_{K+1})+\alpha \textrm{Tr}(A_{K+1} A^\perp _{K+1})\), we can determine \(\alpha =\textrm{Tr}[(\rho _0-\varrho _K)A_{K+1}]/\textrm{Tr}(A_{K+1}A_{K+1}^\perp )\). Thus, the projection of \(\rho _0\) onto the subspace spanned by \(\{A_i\}_{i=1}^K\) and \(A_{K+1}\) is given by:
\(\varrho _{K+1}=\varrho _K+\alpha A^\perp _{K+1}.\)
More in detail, write \(\varrho _K{:}{=}\sum _n\text {Tr}(\rho _0\Gamma _n)\Gamma _n\) and \(A_{K+1}=A^\perp _{K+1}+\Omega _{K+1}\) with \(\Omega _{K+1}{:}{=}\sum _n\text {Tr}(A_{K+1}\Gamma _n)\Gamma _n\), where \(\text {Tr}(\Gamma _n\Gamma _m)=\delta _{n,m}\). We have \(\text {Tr}(\varrho _K A^\perp _{K+1})=0\), \(\text {Tr}(\rho _0 \Omega _{K+1}) = \sum _n\text {Tr}(A_{K+1}\Gamma _n)\text {Tr}(\rho _0\Gamma _n)=\text {Tr}(\varrho _K A_{K+1})\) and
\(\textrm{Tr}(A_{K+1}A^\perp _{K+1})=\textrm{Tr}[(A^\perp _{K+1}+\Omega _{K+1})A^\perp _{K+1}]=\Vert A^\perp _{K+1}\Vert ^2_{HS}.\)
From the latter, we have that:
\(\textrm{Tr}[(\rho _0-\varrho _K)A_{K+1}]=\textrm{Tr}(\rho _0 A_{K+1})-\textrm{Tr}(\rho _0 \Omega _{K+1})=\textrm{Tr}(\rho _0 A^\perp _{K+1}).\)
Hence,
\(\alpha =\frac{\textrm{Tr}(\rho _0 A^\perp _{K+1})}{\Vert A^\perp _{K+1}\Vert ^2_{HS}},\qquad \varrho _{K+1}=\varrho _K+\frac{\textrm{Tr}(\rho _0 A^\perp _{K+1})}{\Vert A^\perp _{K+1}\Vert ^2_{HS}}A^\perp _{K+1}.\)
\(\square \)
Notice that the rhs of (1) represents an upper bound on the parameter \(\alpha ^k_i\) defined in Algorithm IOS. Since \(\Vert \varrho _K\Vert _{HS}=\sqrt{\textrm{Tr}(\varrho _K^2)}\), according to Proposition 1, the norm \(\Vert \varrho _K\Vert _{HS}\) of the projection \(\varrho _K\) of \(\rho _0\) onto the subspace spanned by a subset of observables \(\{ A_i\}\) is a useful parameter to optimize the sequence of measurements: The larger \(\Vert \varrho _K\Vert _{HS}\) is, the lower the upper bound on \(d_{HS}(\rho _0,\sigma )\). Therefore, the measurement sequence should be chosen so as to maximize the norm of this projection at each step, since the upper bound (1) is monotonically non-increasing with respect to the norm of the projection. To this aim, it is sufficient to select at each step an observable \(A_{K+1}\) that maximizes the value of \(\frac{\textrm{Tr}^2(\rho _0 A^\perp _{K+1})}{\Vert A^\perp _{K+1}\Vert ^2_{HS}}\).
A more formal version of the above procedure is summarized as Algorithm IAS.
Algorithm IAS: Iterative Sequence based on \(\rho _0\) and the Analytic bound | |
|---|---|
\(\bullet \) | Initialization: Define the sets \({\textsf{M}}=\emptyset \) and \({\textsf{S}}=\{1,\dots ,R\}\). Set \(k=1\). |
\(\bullet \) | Step 1: For all \(j\in {\textsf{S}}\), compute \(A_j^\perp =A_j-\sum _{i\in \textsf{M}}{\textrm{Tr}}(A_j\Gamma _i)\Gamma _{i}\), and \( \omega ^{(k)}_{j}=\frac{\textrm{Tr}^2(\rho _0 A^\perp _j)}{\Vert A^\perp _j\Vert _{HS}^2} \) for all \(j\in {\textsf{S}}\). Then, define the index \( m_k \in \mathrm{arg\,max}_{j\in {\textsf{S}}}\omega ^{(k)}_{j}, \) and the matrix \( \Gamma _{m_k}=\frac{A^\perp _{m_k}}{\Vert A^\perp _{m_k}\Vert _{HS}}. \) Update \(\textsf{M}=\textsf{M}\cup \{m_k\}\), \(\textsf{S}=\textsf{S}\setminus \{m_k\}\). Set \(k=k+1\). |
\(\bullet \) | Step 2: Repeat Step 1 until \(\text {card}(\textsf{M})=d^2\). |
Note that if \(\max _{j\in \textsf{S}}\omega ^{(k)}_{j}=0\), then \(\rho _0\in \text {span}\{\Gamma _{m_1},\dots ,\Gamma _{m_{k-1}}\}\). If the \(\textrm{arg}\max \) in the algorithm above produces more than a single index, one is chosen at random from the set. The sequence is generated by increasing the value of \(\Vert \varrho _k\Vert _{HS}\) as much as possible at each cycle. At the end of the procedure, \(\textsf{M}\) corresponds to a sequence of \(d^2\) linearly independent measurement operators, ordered according to the upper bound on the distance from \(\rho _0\) provided above.
3.3 Verification algorithm based on the measurement sequence
Once we have obtained the measurement sequence \(\textsf{M}\) using one of the algorithms above, we can run Algorithm VM to verify whether the generated state \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate according to C1 and C2.

Remark 2
At the end of the above algorithm, if the procedure ends with \(k=d^2\), we can reconstruct the generated state as \(\rho _{\textrm{exp}}=\sum _{i\in \textsf{N}}c_i A_i\), where the coefficients \(\{c_i\}_{i\in \textsf{N}}\) can be computed, for example, by solving the linear system
\(\sum _{j\in \textsf{N}}\textrm{Tr}(A_iA_j)\,c_j=y_i,\quad i\in \textsf{N}.\)
While in this case all measurements need to be performed and there is no advantage in having ordered them, it is of course not possible to know in advance that this will be the case. Moreover, the computational overhead of ordering the measurements becomes negligible if the verification task has to be performed many times.
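One concrete way to obtain the coefficients \(\{c_i\}\) of Remark 2 (our sketch; the Gram-matrix route is one of several options) is to solve the linear system \(\sum _j \textrm{Tr}(A_iA_j)c_j=y_i\):

```python
import numpy as np

# Sketch: recover rho_exp = sum_i c_i A_i from the exact averages
# y_i = Tr(rho_exp A_i) by solving the Gram system sum_j Tr(A_i A_j) c_j = y_i.
# (For Hermitian A_i the Gram matrix is real and, for a linearly independent
# tomographically complete set, invertible.)
def reconstruct(observables, y):
    G = np.array([[np.trace(a @ b) for b in observables] for a in observables])
    c = np.linalg.solve(np.real(G), np.asarray(y, dtype=float))
    return sum(ci * A for ci, A in zip(c, observables))
```

For a single qubit with the Pauli basis, the Gram matrix is \(2I_4\) and the familiar formula \(\rho =\frac{1}{2}\sum _i \textrm{Tr}(\rho \sigma _i)\sigma _i\) is recovered.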
4 Adaptive quantum state verification
In the previously proposed algorithms, the measurement sequence was determined off-line (i.e., without performing any measurement) by only leveraging the information on the a priori state \(\rho _0\). Here, we improve on Algorithm IOS and Algorithm IAS by also exploiting the measurement data at each step, in addition to the a priori state, to determine the next measurement and then verify the state. We call such a protocol adaptive verification.
For now, suppose that the measurements are perfect: Namely, the sampled output averages correspond to the true expected values for the actual state. We initialize the algorithm as in Algorithm IOS, since before performing the measurements the a priori state is the only accessible information. Compute \(\alpha ^1_i{:}{=}\max _{\rho \in \textbf{S}_i(\rho _0)} D(\rho ,\rho _0)\) for all \(i\in \{1,\dots ,R\}\) and \(m_1\in \mathrm{arg~min}_{i\in \{1,\dots ,R\}}\alpha ^1_i\). If \(\mathrm{arg~min}\) cannot assign a unique \(m_1\), then we consider the following rule: Select an observable at random among those indicated by the criterion of Algorithm IAS, namely those maximizing \(\text {Tr}^2(\rho _0 A^\perp _{j})/\Vert A_j^\perp \Vert ^2_{HS}\). Then, we perform the measurement \(A_{m_1}\) and obtain an empirical estimate of \(y_{m_1}=\text {Tr}(\rho _{\textrm{exp}}A_{m_1})\). For the sake of simplicity in presenting the algorithm, we shall here assume we actually obtain the exact value \(y_{m_1}.\) The case of imperfect estimates can be treated along the same lines. In order to test both criteria C1 and C2, we compute
\(\omega _1{:}{=}\min _{\rho \in \bar{\textbf{S}}_{m_1}}D(\rho ,\rho _0),\qquad \Omega _1{:}{=}\max _{\rho \in \bar{\textbf{S}}_{m_1}}D(\rho ,\rho _0).\)
If \(\omega _1>\epsilon \), then \(\rho _{\textrm{exp}}\) is not \((\epsilon ,D,\rho _0)\)-accurate, and if \(\Omega _1 \le \epsilon \), then \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate. Otherwise, we determine an estimate of \(\rho _{\textrm{exp}}\) based on the measurement data \(y_{m_1}\) as \(\rho _1=\mathrm{arg~min}_{\rho \in \bar{\textbf{S}}_{m_1}} f_{\rho _0}(\rho )\), where \(f_{\rho _0}(\rho )\) is a continuous function such that \(\rho _0=\mathrm{arg~min}_{\rho \in {\mathcal {S}}({\mathcal {H}}_d)} f_{\rho _0}(\rho )\), quantifying the information distance between \(\rho \in {\mathcal {S}}({\mathcal {H}}_d)\) and \(\rho _0\in {\mathcal {S}}({\mathcal {H}}_d)\). Common choices for f are the quantum relative entropy [14] or any distance function on \({\mathcal {S}}({\mathcal {H}}_d)\) [15, Chapter 9]. Strictly convex functions guarantee the uniqueness of the minimum. For all \(i\in \{1,\dots ,R\}\setminus \{m_1\}\), according to the criteria C1 and C2, we compute
\(\delta ^1_i{:}{=}\min _{\rho \in \textbf{S}_{i}(\rho _1)\cap \bar{\textbf{S}}_{m_1}}D(\rho ,\rho _0),\qquad \Delta ^1_i{:}{=}\max _{\rho \in \textbf{S}_{i}(\rho _1)\cap \bar{\textbf{S}}_{m_1}}D(\rho ,\rho _0),\)
where \(\Delta ^1_i\ge \delta ^1_i\ge 0\) and \(\textbf{S}_i(\rho _1)=\{\rho \in {\mathcal {S}}({\mathcal {H}}_d)|\, \text {Tr}(\rho A_i)=\text {Tr}(\rho _1 A_i)\}\). Notice that the constrained set is now computed for \(\rho _1,\) which depends on the actual measurement outcomes. Intuitively, the smaller \(\epsilon -\delta ^1_i\) (resp. \(\Delta ^1_i-\epsilon \)) is, the more likely C1 (resp. C2) is to be verified (see Figure 2).
If for some i we have that \(\Delta ^1_i \ge \delta ^1_i>\epsilon \) or \(\epsilon >\Delta ^1_i \ge \delta ^1_i\), it means that choosing the corresponding measurement is expected to bring the compatible set closer to verifying criterion C1 or C2, respectively. However, if there exists i such that \(\delta ^1_i = 0\), then \(\epsilon -\delta ^1_i\) attains its maximum value \(\epsilon \) and \(\rho _0\in \textbf{S}_{i}(\rho _1)\cap \bar{\textbf{S}}_{m_1}\), which means that C1 cannot yield a conclusion. Thus, if \(\delta ^1_i = 0\) for all i, only \(\Delta ^1_i\) provides information for the selection of the next measurement. Therefore, in order to maximize the chance of a successful verification, we set
\(m_2\in \mathrm{arg\,min}_{i\in \{1,\dots ,R\}\setminus \{m_1\}}\, \textrm{min}\{\epsilon -\delta ^1_i,\Delta ^1_i-\epsilon \}.\)
If \(\mathrm{arg~min}\) cannot assign a unique \(m_2\), we select one by employing the idea of Algorithm IAS, that is, by selecting an observable at random among those which maximize \(\text {Tr}^2(\rho _1 {A}^{\perp }_{j})/\Vert A_j^\perp \Vert _{HS}^2\).
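The tie-breaking rule amounts to projecting each candidate observable onto the orthogonal complement of the already-measured ones and scoring the residual. A minimal numpy sketch (an illustration only, with a hypothetical qubit example; it assumes the \(\Gamma _i\) are Hilbert-Schmidt orthonormal, as in the algorithm):

```python
import numpy as np

def tie_break_scores(candidates, measured_gammas, rho):
    """Score each candidate observable by Tr(rho A_perp)^2 / ||A_perp||_HS^2,
    where A_perp is the component of A orthogonal (in the Hilbert-Schmidt
    sense) to the already-measured, orthonormalized observables Gamma_i."""
    scores = []
    for A in candidates:
        A_perp = np.array(A, dtype=complex)
        for G in measured_gammas:
            A_perp = A_perp - np.trace(G.conj().T @ A_perp) * G  # Gram-Schmidt step
        hs = np.trace(A_perp.conj().T @ A_perp).real             # ||A_perp||_HS^2
        scores.append(np.trace(rho @ A_perp).real ** 2 / hs if hs > 1e-12 else 0.0)
    return scores

# Toy qubit example: after measuring sigma_z, the sigma_z candidate scores 0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho_plus = 0.5 * (np.eye(2) + sx)               # current estimate rho_1 = |+><+|
scores = tie_break_scores([sx, sz], [sz / np.sqrt(2)], rho_plus)
```

As expected, an observable already in the span of the measured ones has zero residual and is never selected, while directions still unprobed receive a positive score.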
Then, the whole procedure of verification can be defined recursively.
Remark 3
Note that, at each step, determining an estimate \(\rho _k\) of \(\rho _{\textrm{exp}}\) amounts to solving a quantum state tomography problem [2] from partial information. The obtained sequence \(\{\rho _k\}^R_{k=1}\) converges to \(\rho _{\textrm{exp}}\): indeed, \(\rho _R\) is the unique element of \(\bigcap ^R_{i=1}\bar{\textbf{S}}_{i}=\{\rho _{\textrm{exp}}\}\), since the measurements are assumed to be perfect.
We summarize the algorithm of adaptive verification with perfect measurements as Algorithm AV.
Algorithm AV: Adaptive Verification

\(\bullet \) Initialization: Define the sets \(\textsf{M}=\emptyset \) and \(\textsf{S}=\{1,\dots ,R\}\), and compute

\(\alpha ^1_i{:}{=}\max _{\rho \in \textbf{S}_i(\rho _0)} D(\rho ,\rho _0)\)

for all \(i\in \textsf{S}\), as well as \(\mathrm{arg~min}_{i\in \textsf{S}}\alpha ^1_i\). If \(\mathrm{arg~min}\) outputs a single integer, set \(m_1=\mathrm{arg\,min}_{i\in {\textsf{S}}} \alpha ^1_i\). If \(\mathrm{arg~min}\) outputs multiple integers, choose at random an \(m_1\in \mathrm{arg\,max}_{i\in {\textsf{S}}} \text {Tr}^2(\rho _0 A^\perp _i)/\Vert A^\perp _i\Vert ^2_{HS}\) among them. Set \(\Gamma _{m_1}=A_{m_1}/\Vert A_{m_1}\Vert _{HS}\) and \(k=1\).

\(\bullet \) Step 1: Perform the measurements corresponding to \(A_{m_k}\) and collect the sampled average output \(y_{m_k}\). Update \(\textsf{M}=\textsf{M}\cup \{m_k\}\) and \(\textsf{S}=\textsf{S}\setminus \{m_k\}\). Compute

\(\omega _k{:}{=} \textrm{min}_{\rho \in \bigcap _{n\in \textsf{M}}\bar{\textbf{S}}_n}D(\rho ,\rho _0), \quad \Omega _k{:}{=} \max _{\rho \in \bigcap _{n\in \textsf{M}}\bar{\textbf{S}}_n}D(\rho ,\rho _0).\)

− If \(\omega _k>\epsilon \), then \(\rho _{\textrm{exp}}\) is not \((\epsilon ,D,\rho _0)\)-accurate; stop the procedure.

− If \(\Omega _k \le \epsilon \), then \(\rho _{\textrm{exp}}\) is \((\epsilon ,D,\rho _0)\)-accurate; stop the procedure.

− Otherwise, set \( \rho _k\in \mathrm{arg\,min}_{\rho \in \bigcap _{n\in \textsf{M}}\bar{\textbf{S}}_{n}}f_{\rho _0}(\rho ). \)

\(\bullet \) Step 2: Collect in \(\bar{\textsf{S}}\) all \(i\in \textsf{S}\) such that \(A_i\notin \text {span}\{A_n\}_{n\in \textsf{M}}\). For all \(i\in \bar{\textsf{S}}\), compute

\( \delta ^k_i {:}{=}\textrm{min}_{\rho \in \textbf{S}_{i}(\rho _{k}) \cap \bigcap _{n\in \textsf{M}}\bar{\textbf{S}}_{n} } D(\rho ,\rho _0), \quad \Delta ^k_i {:}{=}\max _{\rho \in \textbf{S}_{i}(\rho _{k}) \cap \bigcap _{n\in \textsf{M}}\bar{\textbf{S}}_{n}} D(\rho ,\rho _0), \)

where \(\textbf{S}_i(\rho _k)=\{\rho \in {\mathcal {S}}({\mathcal {H}}_d)|\,\text {Tr}(\rho A_i)=\text {Tr}(\rho _k A_i)\}\).

If \(\delta ^k_i=0\) for all \(i\in \bar{\textsf{S}}\), compute \(\mathrm{arg\,min}_{i\in \bar{\textsf{S}}} \Delta ^{k}_i\).

− If \(\mathrm{arg~min}\) outputs a single integer, set \(m_{k+1}=\mathrm{arg\,min}_{i\in \bar{\textsf{S}}} \Delta ^{k}_i \).

− If \(\mathrm{arg~min}\) outputs multiple integers, compute \(A_j^\perp =A_j-\sum _{i\in \textsf{M}}{\textrm{Tr}}(A_j\Gamma _i)\Gamma _i\) for all \(j\in \bar{\textsf{S}}\) and choose at random \(m_{k+1}\in \mathrm{arg\,max}_{j\in \bar{\textsf{S}}}\text {Tr}^2(\rho _0 A^\perp _j)/\Vert A^\perp _j\Vert ^2_{HS}.\) Set \(\Gamma _{m_{k+1}}=A_{m_{k+1}}^\perp /\Vert A_{m_{k+1}}^\perp \Vert _{HS}.\)

Otherwise, compute

\(\mathrm{arg\,min}_{i\in \bar{\textsf{S}}} \big \{\textrm{min}\{\epsilon -\delta ^{k}_i,\Delta ^{k}_i-\epsilon \} \big \}.\)

− If \(\mathrm{arg~min}\) outputs a single integer, set \(m_{k+1}=\mathrm{arg\,min}_{i\in \bar{\textsf{S}}} \big \{\textrm{min}\{\epsilon -\delta ^{k}_i,\Delta ^{k}_i-\epsilon \} \big \}\).

− If \(\mathrm{arg~min}\) outputs multiple integers, compute \(A_j^\perp =A_j- \sum _{i\in \textsf{M}}{\textrm{Tr}}(A_j\Gamma _i)\Gamma _i\) for all \(j\in \bar{\textsf{S}}\) and choose at random \(m_{k+1}\in \mathrm{arg\,max}_{j\in {\bar{\textsf{S}}}}\text {Tr}^2(\rho _0 A^\perp _j)/\Vert A^\perp _j\Vert ^2_{HS}.\) Set \(\Gamma _{m_{k+1}}=A_{m_{k+1}}^\perp /\Vert A_{m_{k+1}}^\perp \Vert _{HS}.\)

Update \(k=k+1\).

\(\bullet \) Step 3: Repeat Step 1 and Step 2 until \(\text {card}(\textsf{M})=d^2\).
Since the measurements are perfect, the algorithm always yields a verification result by the time it ends. In Step 2, we specifically consider the case \(\delta ^k_i=0\) for all \(i\in \bar{\textsf{S}}\), in which \(\rho _0\) belongs to the compatible sets, so C1 cannot be verified. Thus, we can only apply C2 to determine the next measurement. If \(\rho _{\textrm{exp}}=\rho _0\), in Step 1 of Algorithm AV we have \(\rho _k \equiv \rho _0\) for any \(k\in \{1,\dots ,R\}\), since \(\rho _0=\mathrm{arg~min}_{\rho \in {\mathcal {S}}({\mathcal {H}}_d)} f_{\rho _0}(\rho )\), which implies \(\delta ^k_i\equiv 0\). Thus, in this case, Algorithm AV is equivalent to the combination of Algorithm IOS and Algorithm IAS.
Note that Algorithm AV can also be applied to the imperfect measurement case. However, if the sample size is not large enough, or if the measurements are affected by errors or bias, one may obtain \(\bigcap ^K_{i=1}\hat{\textbf{S}}_{m_i}=\emptyset .\) In this case, we need to stop the verification process and re-measure \(\rho _{\textrm{exp}}\).
5 Application: two-qubit systems
In the following, we test the proposed algorithms by simulating measurements to verify the accuracy of the preparation of random pure states in a two-qubit system. We summarize the key elements of the numerical experiments below.
Target states: We pick, according to the normal distribution, 100 sets of 4 independent complex random numbers with real and imaginary parts in \([-100,100]\), i.e., \(|\psi _i\rangle \in {\mathbb {C}}^4\) with \(i=1,\dots ,100\). Then, we generate 100 pure target states as \(\rho _{0,i}=\frac{|\psi _i\rangle \langle \psi _i|}{\text {Tr} (|\psi _i\rangle \langle \psi _i|)}\).
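The sampling step can be sketched as follows (an illustration, assuming plain Gaussian real and imaginary components for the entries of \(|\psi _i\rangle \); normalization makes the overall scale irrelevant):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pure_state(d=4):
    # Random pure target: normalize the projector onto a vector whose
    # real and imaginary parts are independent Gaussian samples
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)
    rho = np.outer(psi, psi.conj())
    return rho / np.trace(rho).real

rho0 = random_pure_state()
```

The result is a unit-trace, Hermitian, rank-one density matrix, i.e., a valid pure state.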
Bures distance: The distance we employ is the Bures distance, which for a pure target state \(\rho _0\) reduces to \(d_B(\rho ,\rho _0)=\sqrt{2(1-\sqrt{F(\rho ,\rho _0)})}=\sqrt{2(1-\sqrt{\text {Tr}(\rho \rho _0)})}\). Clearly, \(d_B(\rho ,\rho _0)\) is strictly monotonically decreasing in \(\text {Tr}(\rho \rho _0)\). Since \(\text {Tr}(\rho \rho _0)\) is linear in \(\rho \), we can apply convex optimization (CVX-SDP [16]) in the simulations to search for its minimum and maximum values under the constraints.
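For intuition, in the single-qubit case the compatible set is an affine slice of the Bloch ball and these extrema admit a closed form. A minimal numpy sketch (an illustration only, not the CVX-SDP routine used in the simulations; it assumes the measured directions are orthonormal Pauli axes and ignores sampling noise):

```python
import numpy as np

def bures_from_fidelity(F):
    # Bures distance to a pure target as a function of the fidelity F = Tr(rho rho0)
    return np.sqrt(2.0 * (1.0 - np.sqrt(max(F, 0.0))))

def fidelity_extremes(n, dirs, vals):
    """Min and max of Tr(rho rho0) = (1 + r.n)/2 over the Bloch ball
    intersected with the planes r.dirs[i] = vals[i] (orthonormal dirs)."""
    n = np.asarray(n, dtype=float)
    if len(dirs) > 0:
        V = np.array(dirs, dtype=float)
        r0 = V.T @ np.array(vals, dtype=float)   # Bloch component fixed by the data
        n_perp = n - V.T @ (V @ n)               # part of n not yet probed
    else:
        r0, n_perp = np.zeros(3), n
    rad = np.sqrt(max(1.0 - r0 @ r0, 0.0))       # remaining Bloch-ball radius
    base = 1.0 + r0 @ n
    spread = rad * np.linalg.norm(n_perp)
    return (base - spread) / 2.0, (base + spread) / 2.0

# Target |0> (Bloch vector e_z); one sigma_z measurement with average 0.99
lo, hi = fidelity_extremes([0, 0, 1], [[0, 0, 1]], [0.99])
```

Here a single well-chosen measurement pins the fidelity to 0.995, so \(\Omega _1\le \epsilon \) and the state is certified accurate after one setting; with no measurements the fidelity can be anywhere in \([0,1]\). This is the geometric picture behind ordering the measurements.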
Accuracy: \(\epsilon = \sqrt{2(1-\sqrt{{\tilde{\epsilon }}})}\), where \(\tilde{\epsilon }\) is the desired precision for the fidelity \(\text {Tr}(\rho \rho _0)\). We consider \(\tilde{\epsilon }=0.95\) so that \(\epsilon = 0.2250\).
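The threshold conversion is a one-liner to sanity-check numerically:

```python
import numpy as np

eps_tilde = 0.95                                 # required fidelity Tr(rho rho0)
eps = np.sqrt(2 * (1 - np.sqrt(eps_tilde)))     # equivalent Bures-distance threshold
```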
Measurements: We apply projective measurements onto Pauli eigenstates. Let \(\Pi _1,\dots ,\Pi _6\) be the eigenprojectors of the Pauli matrices, ordered so that each Pauli matrix contributes first its eigenvalue-1 and then its eigenvalue-\((-1)\) projector, i.e., \(\sigma _x \Pi _1 = \Pi _1, \sigma _x \Pi _2 = -\Pi _2, \dots , \sigma _z \Pi _6 = -\Pi _6\). We denote by \(A_{6(i-1)+j}=\Pi _i\otimes \Pi _j\), with \(i,j\in \{1,\dots ,6\}\), the 36 observables for the two-qubit system. The set of observables \(\{A_i\}^{36}_{i=1}\) is informationally complete (in fact, overcomplete).
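This construction, and the completeness claim, can be checked directly (a sketch; the projector ordering follows the convention stated above):

```python
import numpy as np

# Single-qubit Pauli matrices
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

# Pi_1..Pi_6: for each Pauli, the +1 eigenprojector followed by the -1 one
projs = []
for s in (sx, sy, sz):
    vals, vecs = np.linalg.eigh(s)                          # eigenvalues ascending: [-1, +1]
    projs.append(np.outer(vecs[:, 1], vecs[:, 1].conj()))   # eigenvalue +1
    projs.append(np.outer(vecs[:, 0], vecs[:, 0].conj()))   # eigenvalue -1

# The 36 two-qubit observables A_{6(i-1)+j} = Pi_i (x) Pi_j
A = [np.kron(P, Q) for P in projs for Q in projs]

# Informationally (over)complete: the 36 operators span a space of dimension 16 = d^2
rank = np.linalg.matrix_rank(np.array([a.flatten() for a in A]))
```

Since the span already has dimension 16 while 36 observables are available, the order in which they are measured genuinely matters, which is the premise of the algorithms above.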
Generated states: We generate 100 full-rank \((\epsilon ,d_B,\rho _{0,k})\)-accurate states \(\rho ^{a}_{\textrm{exp},k}\) and 100 full-rank \((\epsilon ,d_B,\rho _{0,k})\)-non-accurate states \(\rho ^{n}_{\textrm{exp},k}\) by perturbing the target state \(\rho _{0,k}\), with \(k\in \{1,\dots ,100\}\), as
where \(\lambda \in (0,1)\), \(\eta >0\) and the \(H_k\) are random Hermitian matrices. We generate the random \(H_k\in {\mathcal {B}}_*({\mathbb {C}}^4)\) as \(H_k = \sum ^{15}_{j=0}h_{j,k}\Gamma _j\), where \(\Gamma _0 = \mathbb {1}_4\) and \(\{\Gamma _j\}^{15}_{j=1}\) are generators of the Lie algebra \(\mathfrak {su}(4)\) satisfying \(\text {Tr}(\Gamma _j)=0\) and \(\text {Tr}(\Gamma _m \Gamma _j)=2\delta _{jm}\) for \(j,m\in \{1,\dots ,15\}\), and \(\{h_{j,k}\}^{15}_{j=0}\) are random scalars drawn from the uniform distribution on the interval \((-1,1)\). We set \(\eta = 0.1\), with \(\lambda = 0.0001\) for the accurate case and \(\lambda = 0.1\) for the non-accurate case.
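The random perturbations \(H_k\) can be sketched as follows. Here the \(\Gamma _j\) are realized as normalized two-qubit Pauli products, one valid choice of \(\mathfrak {su}(4)\) generators with the stated normalization \(\text {Tr}(\Gamma _m \Gamma _j)=2\delta _{jm}\):

```python
import numpy as np

rng = np.random.default_rng(1)

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

# Gamma_0 = identity; Gamma_1..Gamma_15 are two-qubit Pauli products
# (identity (x) identity excluded), scaled so that Tr(Gamma_m Gamma_j) = 2 delta_mj
gammas = [np.eye(4, dtype=complex)] + [
    np.kron(p, q) / np.sqrt(2)
    for a, p in enumerate(paulis) for b, q in enumerate(paulis)
    if (a, b) != (0, 0)]

def random_hermitian():
    # H = sum_{j=0}^{15} h_j Gamma_j with h_j uniform in (-1, 1)
    h = rng.uniform(-1, 1, size=16)
    return sum(c * g for c, g in zip(h, gammas))

H = random_hermitian()
```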
5.1 Before measurements: Algorithm IOS versus Algorithm IAS
Algorithm IOS: We use CVX-SDP mode to apply semidefinite programming, and obtain 100 measurement sequences, \(\textsf{M}_k=[m_{k,j}]_{ j\le 16}\) for \(k\in \{1,\dots ,100\}\).
Algorithm IAS: We obtain 100 measurement sequences, \(\textsf{R}_k=[r_{k,j}]_{j\le 16}\) for \(k\in \{1,\dots ,100\}\).
Comparison: Based on the measurement sequences \(\textsf{R}_k\) generated by Algorithm IAS, we apply semidefinite programming (CVX-SDP mode) to compute the following
where \([\textsf{R}_k]_l\) denotes the first l elements of \(\textsf{R}_k\). The value \(\beta _{k}\) can be considered an indicator of how well Algorithm IAS approximates Algorithm IOS. The upper diagram of Figure 3 shows error bars (mean value and standard deviation) of \(\beta _{k}-\alpha _{k}\), where the \(\alpha _{k}\) are defined in Algorithm IOS; the lower diagram shows the number of measurements required by Algorithm IAS minus the number required by Algorithm IOS for reconstructing \(\rho _{0,k}\). Taking machine precision into account, reconstruction of \(\rho _{0,k}\) means \(d_B(\rho ,\rho _{0,k})\le 10^{-6}\) for every \(\rho \) belonging to the compatible set. Over the 100 target states \(\rho _{0,k}\), the mean values and standard deviations of the number of measurements required for reconstruction are (5.69, 0.5449) for Algorithm IOS and (6.47, 0.6884) for Algorithm IAS. It is worth noting that in a few cases Algorithm IOS requires more measurements than Algorithm IAS, since Algorithm IOS does not always provide the optimal measurement sequence, being itself an approximation of Algorithm OS.
5.2 Accurate \(\rho _{\textrm{exp}}\): Algorithm IOS versus Algorithm IAS versus Algorithm AV versus control groups
Control groups: Since the set of measurements considered here is informationally overcomplete, we generate 5 random measurement sequences for each accurate generated state \(\rho ^{a}_{\textrm{exp},k}\), and every sequence contains 16 linearly independent observables.
Numerical Test: We apply the verification protocol (Algorithm VM) on the measurement sequences generated off-line by Algorithm IOS, Algorithm IAS and randomized control groups and run the adaptive protocol (Algorithm AV) with \(f_{\rho _0}(\rho )=d_B(\rho ,\rho _0)\).
Remark 4
In the case of multiple measurements with the same index of merit, Algorithm IAS selects one measurement at random, while Algorithms IOS and AV use the following rule, inspired by Algorithm IAS: select an observable at random among those which maximize \(\text {Tr}^2(\rho _1 {A}^{\perp }_{j})/\Vert A_j^\perp \Vert _{HS}^2\). This further optimization step is based on analytic formulas, so it is not computationally intensive. The same rule is used in the next set of simulations as well.
The main results are summarized in Figure 4 and Table 1. The first diagram of Figure 4 shows the histogram of the number of measurements required for the verification of accuracy by Algorithm IOS, Algorithm IAS, Algorithm AV and the control groups. This diagram and Table 1 confirm the efficiency of our algorithms in the verification of accuracy. The remaining diagrams show histograms of the differences in the number of measurements required by the different algorithms. In this situation, Algorithm IOS exhibits an advantage with respect to Algorithm IAS, and the performance of Algorithm AV is almost equal to that of Algorithm IOS. These results are not surprising: when the state to be verified is indeed close to the target one, Algorithm IOS is expected to provide the best iteratively built sequence. Nonetheless, the performance of Algorithm IAS is fairly close (one extra measurement operator is needed on average), and it has the advantage of avoiding iterated optimization procedures, as it relies only on analytic formulas.
Remark 5
It is worth noticing that the performance of Algorithm AV strongly depends on the choice of the function \(f_{\rho _0}\). Here, we only consider the basic choice \(f_{\rho _0}(\rho )=d_B(\rho ,\rho _0)\); the optimization of \(f_{\rho _0}\) will be the focus of future work.
The first histogram displays the distribution of the number of measurements required to verify the accuracy of \(\rho ^{a}_{\mathrm{{exp}},k}\) for \(k=1,\dots ,100\). The other three show the distribution of the difference between the lengths of the sequences of two algorithms for the same set of generated measurements: For example, if the displayed \(N(\text {Alg. X})-N(\text {Alg. Y})\) is negative, it indicates an advantage for \(\text {(Alg. X)}\)
5.3 Non-accurate \(\rho _{\textrm{exp}}\): Algorithm IOS versus Algorithm IAS versus Algorithm AV versus control groups
Control groups: We generate 5 random measurement sequences for each non-accurate generated state \(\rho ^{n}_{\textrm{exp},k}\), and every sequence contains 16 linearly independent observables.
Numerical Test: We apply the verification protocol (Algorithm VM) on the measurement sequences generated off-line by Algorithm IOS, Algorithm IAS and randomized control groups and also run the adaptive protocol (Algorithm AV) with \(f_{\rho _0}(\rho )=d_B(\rho ,\rho _0)\).
The main results are summarized in Figure 5 and Table 2. The first diagram of Figure 5 shows the histogram of the number of measurements required for the verification of non-accuracy by Algorithm IOS, Algorithm IAS, Algorithm AV and the control groups. This diagram and Table 2 confirm the efficiency of our algorithms in the verification of non-accuracy with respect to random sequences. The remaining diagrams show histograms of the differences in the number of measurements required by the different algorithms. We can observe that the performances are similar, with a slight advantage for the adaptive protocol, Algorithm AV. Other numerical experiments indicate that the difference in performance becomes more relevant as the needed number of measurements grows.
The first histogram displays the distribution of the number of measurements required to verify the non-accuracy of \(\rho ^{n}_{\mathrm{{exp}},k}\) for \(k=1,\dots ,100\). The other three show the distribution of the difference between the lengths of the sequences of two algorithms for the same set of generated measurements.
6 Conclusions
In this work, we define and study quantum state verification, a key task to test the effectiveness of quantum state preparation procedures, quantum communication channels, quantum memories and a variety of quantum control algorithms.
Assuming that i.i.d. copies of the system can be produced, the resulting state can be identified by tomographic techniques: sampled averages of a basis of observables are sufficient to determine an estimate of the state and thus to decide whether it is compatible with given accuracy requirements. We propose improved strategies to select the observables to be measured, so that a decision on the accuracy of the preparation can be reached well before the full set of measurements is completed. The protocols rely on prior information about the target state and either provide a full ordered list of observables to be measured or adaptively decide the next observable based on the previously obtained outcomes.
Let us briefly compare the sample complexity of our method with that of verification techniques that optimize the measurements with respect to the target state. In [3], by tuning the measurements for a specific target state \(\rho _0\), Propositions 16 and 20 establish that a sample complexity \(m>1/\epsilon \) is needed to guarantee:
In our strategy, the errors in the algorithms emerge from the substitution of the exact expected output of measuring the observable \(A_{m_k}\), namely \(y_{m_k}\), with the empirical mean estimate over n measurements, \(Y_{m_k} {:}{=} \frac{1}{n} \sum ^n_{i=1} A^{(i)}_{m_k}\), where \(A^{(i)}_{m_k}\) represents the outcome of the i-th measurement of \(A_{m_k}\) in the state \(\rho _{\textrm{exp}}\). Consequently, the primary errors in our approach come from \(|Y_{m_k} - y_{m_k}|\), whose scaling can be characterized as in Proposition 10 of [3], which states that
for sample size \(m>k/\epsilon ^2 \log (2/\delta ).\) In our scheme, these errors may sum over a finite number of measurements, thus maintaining the \(1/\epsilon ^2\) scaling.
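The empirical-mean estimator \(Y_{m_k}\) is straightforward to simulate (a minimal sketch; outcomes are drawn from the eigenvalues of the observable with Born-rule probabilities):

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_mean(A, rho, n):
    """Simulate n projective measurements of observable A on state rho and
    return the empirical mean Y = (1/n) * sum of the sampled eigenvalues."""
    vals, vecs = np.linalg.eigh(A)
    probs = np.array([(vecs[:, k].conj() @ rho @ vecs[:, k]).real
                      for k in range(len(vals))])
    probs = np.clip(probs, 0.0, None)
    probs = probs / probs.sum()              # Born-rule outcome probabilities
    outcomes = rng.choice(vals, size=n, p=probs)
    return outcomes.mean()

# Example: measuring sigma_z on |0><0| always yields +1
sz = np.array([[1, 0], [0, -1]], dtype=complex)
rho_zero = np.array([[1, 0], [0, 0]], dtype=complex)
y = sample_mean(sz, rho_zero, 500)
```

The fluctuation of this mean around \(y_{m_k}\) shrinks as \(1/\sqrt{n}\), which is the source of the \(1/\epsilon ^2\) sample-complexity scaling discussed above.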
Note that in the first case \(\epsilon \) denotes the distance with respect to perfect fidelity, while in our case it is the distance between true and estimated averages. However, the two can be related using standard quantum information bounds, obtaining the same scaling.
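For instance, with the Bures distance to a pure target used in Sect. 5, a threshold \(\epsilon \) on the distance translates into a fidelity threshold:

```latex
d_B(\rho,\rho_0)\le\epsilon
\;\Longleftrightarrow\;
\sqrt{2\bigl(1-\sqrt{\mathrm{Tr}(\rho\rho_0)}\bigr)}\le\epsilon
\;\Longleftrightarrow\;
\mathrm{Tr}(\rho\rho_0)\ge\Bigl(1-\frac{\epsilon^2}{2}\Bigr)^{2},
```

so that \(1-\mathrm{Tr}(\rho \rho _0)\le \epsilon ^2-\epsilon ^4/4\le \epsilon ^2\): a Bures threshold \(\epsilon \) corresponds to an infidelity of order \(\epsilon ^2\), which makes the two scalings directly comparable.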
While our approach scales unfavorably, as a linear function of \(1/\epsilon ^2\), all strategies achieving the \(1/\epsilon \) scaling rely on the ability to tune the measurements for the specific target. Here, on the other hand, we are limited to a fixed, finite set of general measurements, a situation motivated by typical experimentally available setups.
Numerical tests indicate that, for example, a fidelity of 0.95 can be tested on a two-qubit system with just 5 measurements of joint Pauli operators, whereas using randomized sequences requires at least 8. While solving the problem exactly requires solving and comparing multiple optimization problems, we also propose an iterative, suboptimal algorithm whose solution can be computed analytically, based on a geometric approximation of the set of states compatible with given measurement outcomes. The adaptive strategy holds an advantage, especially when the needed number of measurements grows, albeit requiring a more involved implementation. Further work will address the use of optimized measurement sequences for fast tomography, the use of different distance functions in the adaptive strategies, and the application to real data from experimental systems of interest.
References
Eisert, J., Hangleiter, D., Walk, N., Roth, I., Markham, D., Parekh, R., Chabaud, U., Kashefi, E.: Quantum certification and benchmarking. Nat. Rev. Phys. 2(7), 382–390 (2020)
Paris, M., Rehacek, J.: Quantum State Estimation, vol. 649. Springer, Berlin/Heidelberg (2004)
Kliesch, M., Roth, I.: Theory of quantum system certification. PRX Quantum 2, 010201 (2021)
Pallister, S., Linden, N., Montanaro, A.: Optimal verification of entangled states with local measurements. Phys. Rev. Lett. 120(17), 170502 (2018)
Li, Y., Zhang, H., Li, Z., Zhu, H.: Minimum number of experimental settings required to verify bipartite pure states and unitaries. Phys. Rev. A 104, 062439 (2021)
Wang, K., Hayashi, M.: Optimal verification of two-qubit pure states. Phys. Rev. A 100(3), 032315 (2019)
Dangniam, N., Han, Y.-G., Zhu, H.: Optimal verification of stabilizer states. Phys. Rev. Res. 2(4), 043323 (2020)
Yu, X.-D., Shang, J., Gühne, O.: Optimal verification of general bipartite pure states. Npj Quantum Inf. 5(1), 112 (2019)
Jiang, X., Wang, K., Qian, K., Chen, Z., Chen, Z., Lu, L., Xia, L., Song, F., Zhu, S., Ma, X.: Towards the standardization of quantum state verification using optimal strategies. Npj Quantum Inf. 6(1), 90 (2020)
Li, Z., Han, Y.-G., Zhu, H.: Optimal verification of Greenberger-Horne-Zeilinger states. Phys. Rev. Appl. 13(5), 054002 (2020)
Liu, Y.-C., Yu, X.-D., Shang, J., Zhu, H., Zhang, X.: Efficient verification of Dicke states. Phys. Rev. Appl. 12(4), 044020 (2019)
Liu, Y.-C., Shang, J., Yu, X.-D., Zhang, X.: Efficient verification of quantum processes. Phys. Rev. A 101(4), 042315 (2020)
Gross, D., Liu, Y.-K., Flammia, S.T., Becker, S., Eisert, J.: Quantum state tomography via compressed sensing. Phys. Rev. Lett. 105, 150401 (2010)
Zorzi, M., Ticozzi, F., Ferrante, A.: Minimum relative entropy for quantum estimation: feasibility and general solution. IEEE Trans. Inf. Theory 60(1), 357–367 (2013)
Nielsen, M.A., Chuang, I.: Quantum Computation and Quantum Information. Cambridge University Press, Cambridge (2000)
Grant, M., Boyd, S.: CVX: Matlab Software for Disciplined Convex Programming, version 2.1. http://cvxr.com/cvx (2014)
Acknowledgements
F.T. and G.V. acknowledge partial funding from the European Union - NextGenerationEU, within the National Center for HPC, Big Data and Quantum Computing (Project No. CN00000013, CN 1, Spoke 10) and from the European Union’s Horizon Europe research and innovation programme under the project “Quantum Secure Networks Partnership” (QSNP, grant agreement No. 101114043). Views and opinions expressed are, however, those of the author(s) only and do not necessarily reflect those of the European Union or European Commission-EU. Neither the European Union nor the granting authority can be held responsible for them.
Funding
Open access funding provided by Università degli Studi di Padova within the CRUI-CARE Agreement.
Ethics declarations
Conflict of interest
The authors have no relevant financial or non-financial interests to disclose. The datasets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Liang, W., Ticozzi, F. & Vallone, G. Optimizing measurements sequences for quantum state verification. Quantum Inf Process 22, 419 (2023). https://doi.org/10.1007/s11128-023-04167-y