1 Introduction

One of the tasks where quantum computers outperform classical computers is the search for marked elements in unsorted lists, thanks to Grover’s algorithm [1]. Quantum abstract detecting systems (QADS) [2] provide a general quantum computing framework for detection problems, and so they generalize Grover’s algorithm and various other quantum algorithms (the Deutsch–Jozsa algorithm [3], the quantum abstract search [4] and several quantum walks [5,6,7,8]). QADS can also be combined with each other in order to improve their accuracy. In addition, this paradigm has potential applications whenever the existence of a marked element must be determined, for example, when checking the commutativity of finite-dimensional algebras [9]. A similar approach was explored for the detection of undesired measurements in a circuit [10].

The family of combinatorial QADS [11], which control one prefixed QADS with some extra qubits, is particularly interesting. They can be used not only in the original detecting setting, but also in other practical problems, such as a decision algorithm for an eigenvalue of a unitary matrix or, most importantly, phase estimation by the Hadamard test, which approximates an eigenvalue of a given unitary matrix. A well-known method for this task is the quantum phase estimation algorithm [12]; the Hadamard test achieves it in a simpler way, but with less precision, and it can be generalized with combinatorial QADS.

In this work, we introduce functional QADS, which extend the family of combinatorial QADS. Theoretically, they have some interesting properties. First, a functional QADS has a \(\delta \)-detecting time whenever the QADS it is built from has one. Also, the QFT operation can be reinterpreted as a product of Hadamard gates and geometric QADS (a particular type of functional QADS).

On the other hand, we will revisit the previous combinatorial QADS-based algorithms for functional QADS, aiming to improve their probabilities of error or their efficiency. We will find that combinatorial QADS are outperformed in both aspects. Optimality of functional QADS will be shown for the decision algorithm (the rest of the methods are based on it). In addition, a whole new phase estimation algorithm will be presented, based on a specific type of functional QADS, which shows a much more efficient and accurate performance than the generalized Hadamard test. Its circuit is similar to that of quantum phase estimation (QPE), but avoids the use of the QFT, which has an expensive implementation.

In summary, the practical problems that are going to be addressed are:

  1.

    Decision on eigenvalues: given a unitary matrix U, an eigenvector \(|\varphi _0\rangle \) with eigenvalue \(e^{i\beta }\) and an angle \(\alpha \), decide whether \(\beta = \alpha \). The study of this problem will establish the theoretical basis for the following problems.

  2.

    Decision on an interval: given a unitary matrix U, an eigenvector \(|\varphi _0\rangle \) with eigenvalue \(e^{i\beta }\) and an interval \([\alpha - \delta , \alpha + \delta ]\), decide whether \(\beta \in [\alpha - \delta , \alpha + \delta ]\).

  3.

    Phase estimation problem with a confidence interval: given a unitary matrix U and an eigenvector \(|\varphi _0\rangle \) with eigenvalue \(e^{i\beta }\), find a confidence interval for \(\beta \). In particular, the following algorithms will be developed and compared:

    • Dichotomy search (for a given error in the estimation)

    • Generalized Hadamard test (for a given level of confidence; only for combinatorial QADS)

    • \(\delta \)-approximation algorithm (for a given error in the estimation)

After the introduction of the \(\delta \)-approximation algorithm, we will show some considerations about it that illustrate its potential. This includes comparing it with the QPE and other similar phase estimation methods, showing a promising performance in terms of reducing the number of qubits and operations. In addition, it offers the possibility of improving an estimation given by any other method. Also, we address how an error in the preparation of the initial state evolves in the decision algorithm.

This paper is divided into six sections. Section 2 contains the basic definitions and results used in our study. Section 3 will introduce m-functional QADS and their basic properties, paying special attention to geometric QADS. The practical applications and the description of the proposed phase estimation method will be addressed in Sect. 4. Some first insights into the comparison with other phase estimation methods and the propagation of errors will be explored in Sect. 5. Finally, a summary of the conclusions will be given in Sect. 6.

2 Preliminaries

We will summarize the basic results about QADS needed in our study of functional QADS. A QADS is a procedure meant to detect the existence of marked elements in a given set. This is achieved by an operator that fixes an initial state when no element is marked. Grover’s algorithm is an example of QADS.

Hence, we define a QADS \(\mathcal {Q}\) as any (classical deterministic) algorithm that takes, from a set of inputs \(\mathcal {M}\), a boolean function (given by a circuit) \(f: \{0, 1\}^k \longrightarrow \{0, 1\}\) and outputs a unitary transformation \(U = U_f\) on a Hilbert space \(\mathcal {H}\) whose dimension only depends on k, together with a state \(|\varphi _0\rangle \in \mathcal {H}\) (that only depends on k too) such that

$$\begin{aligned} \{ x \in \{0, 1\}^{k} \ | \ f(x) = 1 \} = \emptyset \ \Rightarrow \ U_f |\varphi _0\rangle = |\varphi _0\rangle . \end{aligned}$$

From this definition, a detection scheme was introduced in [2, Main algorithm] for deciding whether the received function f is different from 0 or not, that is, if there exists a marked element in a given set. Namely, the initial state \(|\varphi _0\rangle \) and the detecting operator U from \(\mathcal {Q}\) on the input f are precomputed; later, t is uniformly chosen from \(\{ 0, 1, \ldots , T \}\), for a fixed value T, and \(U^t |\varphi _0\rangle \) is measured on an orthonormal basis containing \(|\varphi _0\rangle \). If the result is \(|\varphi _0\rangle \), the decision \(f \equiv 0\) is made; otherwise \(f \not \equiv 0\).
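As an illustration (ours, not from [2]), the detection scheme can be simulated classically for a toy detecting operator. The sketch below, assuming a simple \(2\times 2\) rotation as U, computes the probability that the scheme outputs \(f \equiv 0\), i.e., the average of \(\left| \langle \varphi _0| U^t |\varphi _0\rangle \right| ^2\) over \(t \in \{0, \ldots , T\}\):

```python
import numpy as np

def detection_error_probability(U, phi0, T):
    """Probability that the scheme wrongly outputs "f = 0" for a nonzero f:
    the average over t in {0, ..., T} of |<phi0| U^t |phi0>|^2."""
    total = 0.0
    state = phi0.astype(complex)
    for _ in range(T + 1):
        total += abs(np.vdot(phi0, state)) ** 2   # contributes t = 0, ..., T
        state = U @ state
    return total / (T + 1)

# Toy detecting operator: a real rotation, for which
# <phi0| U^t |phi0> = cos(t * theta)
theta = 0.8
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
phi0 = np.array([1.0, 0.0])
p_err = detection_error_probability(U, phi0, T=50)
```

The returned value is exactly the error probability quoted in the theorem below for this toy U.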

The performance of a QADS in this algorithm is studied through the concept of \(\delta \)-detecting time, that characterizes the probability of error of the detecting scheme when \(f \not \equiv 0\) (when \(f \equiv 0\), the detection scheme never fails). If \((|\varphi _0\rangle , U = U_f)\) denotes the output of a QADS on input \(f \in \mathcal {M}\), then for a given \(0 < \delta \le 1\), a function \(T: \mathbb {N} \longrightarrow \mathbb {N}\) is a \(\delta \)-(quantum) detecting time for the QADS, if for all nonzero \(f \in \mathcal {M}\) of input size k

$$\begin{aligned} \dfrac{\sum \limits _{t = 0}^{T(k)} \left| \langle \varphi _0| U^t |\varphi _0\rangle \right| ^2}{T(k) + 1} \le 1 - \delta . \end{aligned}$$

Theorem

[2, Main theorem] The detection scheme of the main algorithm always provides a correct output on input zero (i.e., when no marked elements exist), and so the probability of error is fully attributed to nonzero inputs. Namely, such a probability is equal to

$$\begin{aligned} \dfrac{\sum \limits _{t = 0}^{T(k)} \left| \langle \varphi _0| U^t |\varphi _0\rangle \right| ^2}{T(k) + 1}. \end{aligned}$$

\(\delta \)-detecting times allow us to bound the probability of error of the detection scheme. For example, Grover’s algorithm has a \(\frac{\sqrt{2} - 1}{4\sqrt{2}}\)-detecting time of order \(O(\sqrt{2^k})\).

Subsequently, m-combinatorial QADS were introduced in [11]. They are based on a fixed QADS \(\mathcal {Q}\), adding m extra qubits to it in order to control the application of its detecting operator \(U_f\). If \(|\varphi _0\rangle \) is the initial state of \(\mathcal {Q}\) and m is a positive integer, then the m-combinatorial QADS obtained from \(\mathcal {Q}\) is the QADS whose initial state is \(|0\rangle ^{\otimes m} |\varphi _0\rangle \), and whose detecting operator is given by

$$\begin{aligned} C(m, U_f):= (H^{\otimes m} \otimes I) c_m U_f \cdots c_1 U_f (H^{\otimes m} \otimes I), \end{aligned}$$

where \(c_i U_f\) is the operator \(U_f\) controlled by the i-th qubit of the first register. Its circuit is shown in Fig. 1.

Fig. 1
figure 1

Combinatorial QADS circuit

The final state of the circuit is given in the next result.

Proposition

[11, Proposition 2] The amplitude of the state \(C(m, U_f) |0\rangle ^{\otimes m} |\varphi _0\rangle \) related to the basis state \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) is

$$\begin{aligned} \frac{1}{2^m} \sum _{k=0}^{m} \left( {\begin{array}{c}m\\ k\end{array}}\right) \langle \varphi _0| U^k |\varphi _0\rangle . \end{aligned}$$

Apart from the intended detecting nature of QADS, combinatorial QADS have other applications, among them (see [11]):

Decision on eigenvalues Let U be a unitary matrix, and let \(|\varphi _0\rangle \) be a state under the promise that \(U |\varphi _0\rangle = e^{i \beta } |\varphi _0\rangle \). Then, for a given \(\alpha \), we want to decide whether \(\alpha = \beta \). The problem can be solved by implementing \(C(m, V)\), where \(V = e^{-i \alpha } U\). If the result of a final measurement is \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) (the initial state), we conclude \(\beta = \alpha \). Otherwise, we conclude \(\beta \not = \alpha \).

Theorem

[11, Theorem 3] The decision on eigenvalues algorithm is always correct when it outputs NO. So, the probability of error is fully attributed to a YES answer. Namely, such a probability is equal to

$$\begin{aligned} \cos ^{2m} \left( \frac{\beta - \alpha }{2} \right) . \end{aligned}$$

In this paper, we will study the behaviour of this algorithm for functional QADS, and find the optimal ones.

Dichotomy search. With the previous notation, a small interval containing \(\beta \in [0,\pi ]\) is to be found. The interval \([0, \pi ]\) is split into two halves, determining the half where \(\beta \) is more likely to be. Repeating this process as many times as desired yields an interval as small as required.

In this paper, we will take a closer look at this algorithm under the paradigm of functional QADS.

m-Hadamard test It is the generalization of the Hadamard test, a phase estimation algorithm that approximates the angle \(\beta \in [0,\pi ]\), under the previous notation. It takes advantage of the fact that the formula \(\cos ^{2m} \left( \frac{\beta }{2} \right) \) is easily invertible. So, n independent measurements of the final state of the m-combinatorial QADS yield \(\beta \approx \arccos (2 \root m \of {\hat{p}_n} - 1)\), where \(\hat{p}_n\) is the proportion of \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) states obtained from the decision algorithm. It has been concluded that \(m=1\) provides the most balanced version of this method for an unknown \(\beta \), in the sense that increasing m would improve the accuracy when \(\beta \approx 0\) but make it worse when \(\beta \approx \pi \).
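The inversion step can be sketched as follows (our own illustration, with hypothetical parameter choices): the n runs are simulated as Bernoulli trials with success probability \(\cos ^{2m}(\beta /2)\), and the estimated proportion is inverted.

```python
import numpy as np

def hadamard_test_estimate(beta, m, n, rng):
    """Estimate beta from n simulated runs of the m-combinatorial circuit.
    Each run returns the initial state with probability cos^{2m}(beta / 2);
    the proportion of such outcomes is then inverted."""
    p = np.cos(beta / 2) ** (2 * m)
    p_hat = rng.binomial(n, p) / n
    # clip to [-1, 1] so that sampling noise cannot break the arccos
    c = np.clip(2 * p_hat ** (1 / m) - 1, -1.0, 1.0)
    return np.arccos(c)

rng = np.random.default_rng(0)
beta_hat = hadamard_test_estimate(beta=1.1, m=1, n=100_000, rng=rng)
```

With m = 1 this is the plain Hadamard test; larger m sharpens the estimate near \(\beta \approx 0\) at the cost of accuracy near \(\beta \approx \pi \), as discussed above.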

In this paper, these last two algorithms will be considered from functional QADS in order to deal with phase estimation.

3 m-Functional QADS definition and first results

In this section, we provide the formal definition of m-functional QADS, which extends the idea of combinatorial QADS introduced in [11]. The m-functional QADS are built from another QADS, by a control of the original detecting operator \(U_f\) by a superposition of qubits and by a function g as shown in Fig. 2.

Fig. 2
figure 2

Functional QADS circuit

Definition 1

If \(U_f\) is the detecting operator of a QADS \(\mathcal {Q}\), \(|\varphi _0\rangle \) its initial state, m is a positive integer and \(g: \mathbb {N} \longrightarrow \mathbb {Q}\) a function (where \(0 \in \mathbb {N}\)), we define the m-functional QADS obtained from \(\mathcal {Q}\) as the QADS with initial state \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) and detecting operator

$$\begin{aligned} F(m, U_f, g):= (H^{\otimes m} \otimes I) c_m U_f^{g(m-1)} \cdots c_1 U_f^{g(0)} (H^{\otimes m} \otimes I), \end{aligned}$$

where \(c_i U_f\) is the unitary operator that applies \(U_f\) to the second register if the i-th qubit of the first register is \(|1\rangle \), and applies the identity if that qubit is \(|0\rangle \) (i.e., it is the operator \(U_f\) controlled by the i-th qubit of the first register). The size of an m-functional QADS is defined as the number of times that \(U_f\) is applied, that is, \(G=\sum _{n=0}^{m-1} g(n)\).

We let g output rational numbers in order to allow powers and roots of QADS. The m-combinatorial QADS are a particular case of m-functional QADS when \(g(n) = 1\), so \(C(m, U_f) = F(m, U_f, 1)\).

Before getting into the application of functional QADS considered in this paper, which is phase estimation, it is reasonable to study the basic properties of functional QADS, as well as their performance for the original QADS purpose. We first confirm that m-functional QADS are indeed QADS and, then, we also consider under which algorithmic operations of the algorithmic closure of a QADS (collected in Table 1, as taken from [2]) the family of m-functional QADS is closed. Proofs of these facts, and of several others along the paper, can be found in Appendix 1.

Table 1 Transformations in the algorithmic closure of a QADS

Proposition 1

Every m-functional QADS is indeed a QADS.

Proposition 2

  • Extension, inversion, powers and roots of m-functional QADS are also m-functional QADS.

  • The product of m-functional QADS built from the same original QADS is also an m-functional QADS.

  • The product of m-functional QADS built from different original QADS is also an m-functional QADS, as long as they share the initial state and the function g, and the detecting operators involved commute with each other.

The next result provides the amplitude of the initial state at the end of the circuit. This will be a key element in order to calculate the probability of error of the detection algorithm, and of the practical applications developed later in the paper. Observe that when \(g=1\), we recover the result for m-combinatorial QADS given in [11].

Theorem 1

Given an m-functional QADS, the amplitude of the state \(F(m, U_f, g) |0\rangle ^{\otimes m} |\varphi _0\rangle \) associated to the basis state \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) is

$$\begin{aligned} \frac{1}{2^m} \sum _{x=0}^{2^m - 1} \langle \varphi _0| U^B_f |\varphi _0\rangle , \end{aligned}$$

where \(B = \sum _{i=0}^{m-1} x_i g(i)\) and \(x = x_0 + 2x_1 + \cdots + 2^{m-1} x_{m-1}\) with \(x_i \in \{0,1\}\) for all \(i = 0, \ldots , m-1\).
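Theorem 1 can be checked numerically by building the circuit explicitly. In the sketch below (our own illustration), the chain of controlled powers is represented as a block-diagonal operator that applies \(U^{B}\) to the second register for each control basis state \(|x\rangle \), sandwiched between the two Hadamard layers:

```python
import numpy as np

def functional_qads_amplitude(U, phi0, m, g):
    """Amplitude of |0>^m |phi0> after F(m, U, g).  The chain of controlled
    powers c_m U^{g(m-1)} ... c_1 U^{g(0)} acts block-diagonally: on the
    control basis state |x>, the second register receives U^B with
    B = sum_i x_i g(i)."""
    D = U.shape[0]
    M = 2 ** m
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    Hm = np.array([[1.0]])
    for _ in range(m):
        Hm = np.kron(Hm, H)
    HmI = np.kron(Hm, np.eye(D))
    mid = np.zeros((M * D, M * D), dtype=complex)
    for x in range(M):
        B = sum(((x >> i) & 1) * g(i) for i in range(m))
        mid[x * D:(x + 1) * D, x * D:(x + 1) * D] = np.linalg.matrix_power(U, B)
    F = HmI @ mid @ HmI
    init = np.zeros(M * D, dtype=complex)
    init[:D] = phi0                     # |0...0> control block comes first
    return np.vdot(init, F @ init)

# Example: a 2x2 rotation and the geometric function g(n) = 2^n
theta = 0.5
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
phi0 = np.array([1.0, 0.0])
amp = functional_qads_amplitude(U, phi0, 3, lambda n: 2 ** n)
```

For the geometric g, the exponent B ranges over \(0, \ldots , 2^m - 1\), so the returned value agrees with the sum \(\frac{1}{2^m} \sum _x \langle \varphi _0| U^x |\varphi _0\rangle \) of the theorem.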

One of the situations in which this amplitude is needed is when determining \(\delta \)-detecting times. For a QADS, the existence of a \(\delta \)-detecting time makes it possible to bound the probability of error of its main detection algorithm. Thus, it is a key feature for a QADS to be used in detection problems with an error bound. As stated by the following result, under certain conditions, if a fixed QADS has a \(\delta _T\)-detecting time, then any m-functional QADS built from it has its own \(\delta _S\)-detecting time.

Theorem 2

Let \(m > 0\), \(M = 2^m\) and \(G = \sum _{i=0}^{m-1} g(i)\) for a given function g. If \(T:\mathbb {N}\rightarrow \mathbb {N}\) is a \(\delta _T\)-detecting time such that \(1 - \delta _T < 1/M\), and \(T(k) > M - 1\), then any m-functional QADS of size \(G \le \frac{M T(k)}{T(k) + 1 - M}\) will have \(S(k) = \left\lfloor \frac{T(k)}{G} \right\rfloor \) as a \(\delta _S\)-detecting time, where \(\delta _S = 1 - M(1 - \delta _T)\).

It is worth pointing out that \(\frac{M T(k)}{T(k) + 1 - M} \ge \frac{M T(k)}{T(k)} = 2^m\), so in order to guarantee the existence of a \(\delta -\)detecting time, it is enough to ensure that its size is at most \(2^m\).

We are going to focus on three types of functional QADS, introduced below, mainly because of their behaviour on the decision algorithm. Recall that the performance of the practical algorithms later developed in the paper is strongly based on it.

Combinatorial QADS: \(g_c(n) = 1\)

If we consider the function \(g_c\) to be constant and equal to 1, then we obtain the already studied m-combinatorial QADS. They are the simplest and easiest to work with, and they provide an analytically invertible probability of a positive outcome (which provides the approximation of the eigenvalue’s angle \(\beta \), as stated above).

Linear QADS: \(g_l(n) = n + 1\)

m-functional QADS with \(g_l(n) = n + 1\) will be called m-linear QADS. They have a predictable behaviour and are usually a better option than the combinatorial QADS for certain problems. The linear QADS is the best functional QADS among those whose probability of a positive outcome for the decision algorithm is always decreasing with respect to \(|\beta - \alpha |\) in \([0, \pi ]\). This is a desirable property in several situations.

Geometric QADS: \(g_g(n) = 2^n\)

m-geometric QADS are m-functional QADS with \(g_g(n) = 2^n\), which have proven to be the best choice for all the studied applications, as their performance on the decision algorithm is optimal. We will also prove that the quantum Fourier transform can be almost completely explained in terms of a product of geometric QADS.

3.1 Geometric QADS

We will dedicate this subsection to geometric QADS (i.e., \(g_g(n) = 2^n\)), due to their overall superior performance for the practical applications seen in the next section. Figure 3 shows their circuit.

Fig. 3
figure 3

Geometric QADS circuit

From the perspective of implementation, it is worth pointing out that, when \(U_f\) is a rotation, the corresponding power \(U_f^{2^{i}}\) is also a rotation (with a different angle), so the geometric QADS does not require the application of an exponential number of gates. If \(U_f\) is not a rotation, then, for most practical applications, the fact that the number of qubits needed to obtain an accuracy \(\delta \) grows only logarithmically with \(1/\delta \) counteracts the exponential increase in the number of applications of \(U_f\).

For geometric QADS, Theorem 1 can be directly rewritten in the following way.

Corollary 1

In the case of an m-geometric QADS, the amplitude of the state \(F(m, U_f, g_g) |0\rangle ^{\otimes m} |\varphi _0\rangle \) related to the basis state \(|0\rangle ^{\otimes m} |\varphi _0\rangle \) is

$$\begin{aligned} \frac{1}{2^m} \sum _{x=0}^{2^m - 1} \langle \varphi _0| U^x_f |\varphi _0\rangle . \end{aligned}$$

This formula will be especially useful in the computation of later probabilities of error, since it yields expressions involving a geometric sum (hence the name given to the QADS).

In addition, let us show that the quantum Fourier transformation circuit can be described by the composition of a series of geometric QADS. If we denote

$$\begin{aligned} {{UROT}}_k = \begin{pmatrix} 1 &{} 0 \\ 0 &{} e^{\frac{2\pi i}{2^k}} \end{pmatrix}, \end{aligned}$$

then the circuit implementing \(QFT|x_1x_2...x_n\rangle \) is like the one in Fig. 4. Because \({{UROT}}_k^2 = {{UROT}}_{k-1}\) and, consequently, \({{UROT}}_k^{2^p} = {{UROT}}_{k-p}\), we can describe the QFT circuit in the way shown in Fig. 5.
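The halving identity behind this rewriting is immediate to verify numerically (a sketch of ours):

```python
import numpy as np

def urot(k):
    """The diagonal rotation gate UROT_k = diag(1, exp(2*pi*i / 2^k))."""
    return np.diag([1.0, np.exp(2j * np.pi / 2 ** k)])

# Squaring halves the phase denominator: UROT_k^2 = UROT_{k-1},
# and therefore UROT_k^{2^p} = UROT_{k-p}.
lhs = np.linalg.matrix_power(urot(5), 2 ** 3)
rhs = urot(2)
```

This is exactly the relation that lets the controlled rotations of the QFT circuit be regrouped into geometric QADS blocks.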

Fig. 4
figure 4

QFT circuit

Fig. 5
figure 5

QFT circuit as a sequence of geometric QADS

Observe the appearance of a sequence of partial geometric QADS (with m decreasing) plus an initial application of a \(H^{\otimes n}\) gate. This is because the \({{UROT}}_k\) gates commute with each other. Hence, this proves that

$$\begin{aligned} QFT_n = \left( F(0, {{UROT}}_1, g_g) \otimes I^{\otimes n - 1} \right) \left( F(1, {{UROT}}_2, g_g) \otimes I^{\otimes n - 2} \right) \cdots \left( F(n - 1, {{UROT}}_n, g_g) \right) H^{\otimes n}. \end{aligned}$$
(1)

In addition, since the inversion of a geometric QADS is also a geometric QADS, we can obtain an analogous formula for \(QFT_n^\dagger \). This connection illustrates the generality of QADS, and could be helpful for including the QFT gate in the framework and, therefore, in the circuits. This way, algorithms such as the QPE [12, page 224] could be studied through a new perspective and compared to the phase estimation method introduced in the following section.

4 Functional QADS: practical applications

In this section, we consider functional QADS in the context of the phase estimation problem. We aim at an approximation of the phase of an eigenvalue of a unitary matrix U, along with a confidence interval of desired length. Our proposed algorithm solving this problem is based on a chain of several simpler algorithms that will be detailed first. We will begin with an algorithm for a decision problem on eigenvalues and, based on it, we construct a decision algorithm on intervals. An improvement to the latter leads us to the final phase estimation method: the \(\delta \)-approximation algorithm.

4.1 Decision on eigenvalues

In this problem, we are given a unitary matrix U and a state \(|\varphi _0\rangle \) under the promise that \(U |\varphi _0\rangle = e^{i \beta } |\varphi _0\rangle \) for an unknown real number \(\beta \). Then, for a given \(\alpha \in \mathbb {R}\), we shall use a functional QADS-based decision algorithm (DA) that checks whether \(\beta = \alpha \). Since all of the following algorithms are based on this one, it is worth studying it in depth. In this setting, we will see that geometric QADS are the best choice. The algorithm is the following.

Algorithm 1
figure a

Decision on eigenvalues for an m-functional QADS (DA)

Based on this procedure, we can prove a formula for the probability of stating that \(\alpha = \beta \). The proofs of the results of this section can be found in Appendix 2.

Theorem 3

Under the promise that \(U |\varphi _0\rangle = e^{i \beta } |\varphi _0\rangle \), given an angle \(\alpha \) and any m-functional QADS for U, the probability of a positive outcome from the decision algorithm is

$$\begin{aligned} \hbox {DA}(m, g, \beta - \alpha ):= \prod _{n=0}^{m-1} \cos ^2 \left( g(n) \frac{\beta - \alpha }{2} \right) . \end{aligned}$$
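This product is straightforward to evaluate numerically. The following sketch (ours) compares the three families of g introduced in Sect. 3, for m = 5 and a moderate phase gap:

```python
import numpy as np

def da_probability(m, g, t):
    """Theorem 3: probability of a positive outcome, DA(m, g, t),
    where t stands for beta - alpha."""
    return np.prod([np.cos(g(n) * t / 2) ** 2 for n in range(m)])

g_c = lambda n: 1        # combinatorial
g_l = lambda n: n + 1    # linear
g_g = lambda n: 2 ** n   # geometric

# For m = 5 and t = 0.3, the geometric QADS yields the smallest
# false-positive probability, in line with Fig. 6.
probs = {name: da_probability(5, g, 0.3)
         for name, g in [("comb", g_c), ("lin", g_l), ("geo", g_g)]}
```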

This theorem, when \(g=1\), yields the analogous result in the case of combinatorial QADS [11, Theorem 3]. Moreover, the following particular result for geometric QADS can be obtained too.

Theorem 4

Under the promise that \(U |\varphi _0\rangle = e^{i \beta } |\varphi _0\rangle \), given an angle \(\alpha \) and an m-geometric QADS for U, the probability of a positive outcome from the decision algorithm when \(\beta \not = \alpha \) is

$$\begin{aligned} \hbox {DA}(m, g_g, \beta - \alpha ) = \frac{1 - \cos (2^m (\beta - \alpha ))}{2^{2m} (1 - \cos (\beta - \alpha ))}. \end{aligned}$$

As a consequence, we have:

Corollary 2

In the previous conditions, \(\hbox {DA}(m, g_g, \frac{2k\pi }{2^m}) = 0\), for any \(k \in \{1,..., 2^m - 1\}\).
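Both the closed form of Theorem 4 and the zeros of Corollary 2 can be confirmed numerically (a sketch of ours):

```python
import numpy as np

def da_geometric_product(m, t):
    """DA(m, g_g, t) as the product of Theorem 3 with g(n) = 2^n."""
    return np.prod([np.cos(2 ** n * t / 2) ** 2 for n in range(m)])

def da_geometric_closed(m, t):
    """Closed form of Theorem 4 (for t not a multiple of 2*pi)."""
    return (1 - np.cos(2 ** m * t)) / (2 ** (2 * m) * (1 - np.cos(t)))

m = 4
prod_val = da_geometric_product(m, 0.37)
closed_val = da_geometric_closed(m, 0.37)
# Corollary 2: the probability vanishes at t = 2*k*pi / 2^m
zeros = [da_geometric_closed(m, 2 * k * np.pi / 2 ** m)
         for k in range(1, 2 ** m)]
```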

The probabilities of stating that \(\alpha = \beta \) of the DA for a fixed \(m=5\) can be seen in Fig. 6. We observe that both the linear and the geometric QADS clearly outperform the combinatorial QADS, with the geometric being the best choice, especially when \(\beta \) and \(\alpha \) are close to each other. However, it could be argued that both the linear and the geometric QADS have greater sizes than the combinatorial one. So, despite their better probabilities, the corresponding circuits are not as efficient. To address this, we study the performance of the different functional QADS when a size G is fixed, allowing m to vary from one family to another.

Fig. 6
figure 6

Probability of a positive outcome from the decision algorithm for the combinatorial, linear and geometric QADS and a fixed \(m = 5\)

For a fixed \(m_c\), in the combinatorial QADS, U is applied \(m_c\) times; for a certain \(m_g\), in the geometric QADS, U is applied \(\sum _{n=0}^{m_g-1} 2^n = \frac{1-2^{m_g}}{1-2} = 2^{m_g} - 1\) times; for a certain \(m_l\), in the linear QADS, U is applied \(\sum _{n=0}^{m_l-1} (n+1) = \frac{m_l(m_l + 1)}{2}\) times. Hence, if we fix a size G, then \(m_c = G\), \(m_g = \log _2 (G+1)\), and \(m_l\) is the largest integer such that \(m_l^2 + m_l - 2G \le 0\). Obviously, some of these values have to be rounded sometimes. The new graph, for a fixed size of 31 operations (\(m_c = 31\), \(m_g = 5\), and \(m_l\) rounded to 7), can be checked in Fig. 7. Although the combinatorial and linear QADS improve their performance, the geometric QADS is still the best, especially for close values of \(\beta \) and \(\alpha \). Moreover, it uses significantly fewer qubits.
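The bookkeeping above can be condensed into a few lines (our own helper; the names are ours):

```python
from math import floor, log2, sqrt

def register_sizes(G):
    """Control qubits needed by each family for G applications of U:
    m_c = G, m_g = log2(G + 1), and the largest m_l with
    m_l * (m_l + 1) / 2 <= G (the latter two rounded down)."""
    m_c = G
    m_g = floor(log2(G + 1))
    m_l = floor((-1 + sqrt(1 + 8 * G)) / 2)   # root of m^2 + m - 2G = 0
    return m_c, m_g, m_l

sizes = register_sizes(31)   # the case compared in Fig. 7
```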

However, the probability for the geometric QADS is not lower at every point. What we shall prove is that, for a fixed m and on average, the probability of a positive outcome for the geometric QADS is always smaller than for any other functional QADS. To do this, it should be noticed that, in the DA setting, only functional QADS with a natural-valued function g are worth considering. Negative values can be ruled out, since the cosines of DA\((m, g, \beta - \alpha )\) are not affected by a change of sign. Also, since it is desirable that the formula equals 1 when \(|\beta - \alpha | = 2\pi \), we need g(n) to always be an integer. Consequently, we introduce the following definitions.

Fig. 7
figure 7

Probability of a positive outcome from the decision algorithm for the combinatorial, linear and geometric QADS and a fixed size of 31 operations

Definition 2

We say that a natural valued function \(g: \mathbb {N} \longrightarrow \mathbb {N}\) is DA-optimal for a fixed \(m = m_0\) if, for any other natural valued function \(h: \mathbb {N} \longrightarrow \mathbb {N}\),

$$\begin{aligned} \int _0^{\pi } \hbox {DA}(m_0, g, t) \ dt \le \int _0^{\pi } \hbox {DA}(m_0, h, t) \ dt. \end{aligned}$$

An \(m_0\)-functional QADS is DA-optimal for a fixed \(m = m_0\), if its associated function \(g: \mathbb {N} \longrightarrow \mathbb {N}\) is DA-optimal for a fixed \(m = m_0\).

Definition 3

Let \(G_0 > 0\) be a natural number, \(g: \mathbb {N} \longrightarrow \mathbb {N}\) a natural function such that \(\sum _{n = 0}^{m_g -1} g(n) = G_0\) for some \(m_g > 0\). We say g is DA-optimal for a fixed \(G = G_0\) if for any other natural valued function \(h: \mathbb {N} \longrightarrow \mathbb {N}\) such that \(\sum _{n = 0}^{m_h -1} h(n) = G_0\) (for some \(m_h > 0\)),

$$\begin{aligned} \int _0^{\pi } \hbox {DA}(m_g, g, t) \ dt \le \int _0^{\pi } \hbox {DA}(m_h, h, t) \ dt. \end{aligned}$$

An m-functional QADS of size \(G_0\) is DA-optimal, if its associated function \(g: \mathbb {N} \longrightarrow \mathbb {N}\) is DA-optimal for a fixed \(G = G_0\).

Observe that the integrals are defined on the interval \([0, \pi ]\), instead of \([0, 2\pi ]\), because \(\hbox {DA}(m, g, t)\) is symmetrical with respect to \(t = \pi \) when g is natural valued. In this setting, we can prove the following result related to the optimality for a fixed m.

Theorem 5

The m-geometric QADS is DA-optimal for any fixed m. Moreover, among those m-functional QADS which are DA-optimal for m, m-geometric QADS have the smallest size.

Notice that this result captures the average performance of the geometric QADS, so there might be functional QADS (for instance, \(g(n) = (n + 1)^2\)) with a better behaviour for particular values of \(|\beta - \alpha |\). However, the knowledge of the zeros of DA\((m, g_g, t)\) makes geometric QADS easier to handle, and so they will be used henceforth.

On the other hand, the DA-optimality for a fixed size will be studied numerically. Since there are only m-geometric QADS of sizes \(G = 2^m - 1\), we introduce the concept of G-shortened geometric QADS for any other size.

Definition 4

For a given natural \(G > 0\), we define the G-shortened geometric QADS as an m-functional QADS where \(m = \lceil \log _2 (G + 1) \rceil \), \(g(n) = 2^n\) when \(n \not = m - 1\) and \(g(m-1) = G - (2^{m-1} - 1)\).

For example, the shortened geometric QADS for \(G = 18\) would feature \(m = 5\) and \(g([0, 4]) = \{ 1, 2, 4, 8, 3 \}\). Shortened geometric QADS are the closest functional QADS to a geometric QADS, for a given size (if \(G = 2^m - 1\) for some m, shortened geometric QADS are m-geometric QADS). Many times, especially when \(G \approx 2^m - 1\), they are DA-optimal for G or close to optimality. For instance, shortened geometric QADS are DA-optimal for every G from 1 to 19, except for \(G = 12\) (in this case, optimality is achieved by a functional QADS with \(g([0, 4]) = \{ 1, 1, 1, 3, 6 \}\)).
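Definition 4 is easy to express in code (a sketch of ours):

```python
from math import ceil, log2

def shortened_geometric(G):
    """g-values of the G-shortened geometric QADS (Definition 4):
    powers of two, with the last value adjusted so that they sum to G."""
    m = ceil(log2(G + 1))
    g = [2 ** n for n in range(m - 1)]
    g.append(G - (2 ** (m - 1) - 1))
    return g

vals = shortened_geometric(18)
```

For G = 18 this returns [1, 2, 4, 8, 3], the example above; when \(G = 2^m - 1\), the plain m-geometric QADS is recovered.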

4.2 Decision on an interval

The DA can be transformed in order to check whether the eigenvalue \(\beta \) belongs to a given interval \([\alpha - \delta , \alpha + \delta ]\), i.e., whether \(\alpha \) approximates \(\beta \) up to an error of \(\delta \). The idea behind this algorithm is to make a decision based on \(P_{\delta }:= \hbox {DA}(m, g, \delta )\) and an estimation \(P_{\alpha }\) of DA\((m, g, \beta - \alpha )\) obtained from a series of independent runs of the functional QADS-based DA. These probabilities are estimated by the proportion of times that the QADS algorithm gave the initial state as the outcome. If the approximation \(P_{\alpha }\) is greater than \(P_{\delta }\), it is assumed that the distance between \(\alpha \) and \(\beta \) is less than \(\delta \), and so \(\beta \in [\alpha - \delta , \alpha + \delta ]\). The details on the design rationale of this and the following algorithms, and on their actual implementations and performances, can be found in Appendix 2. Figures 8 and 9 show the performance of the studied functional QADS. Once again, the geometric QADS is the most efficient, as it inherits its behaviour from the DA.
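A classical simulation of this decision (our own sketch; the estimation of \(P_{\alpha }\) is modelled as a binomial draw with the probability of Theorem 3) looks as follows:

```python
import numpy as np

def da_probability(m, g, t):
    """Probability of a positive outcome of the DA (Theorem 3)."""
    return np.prod([np.cos(g(n) * t / 2) ** 2 for n in range(m)])

def decide_interval(beta, alpha, delta, m, g, n, rng):
    """Accept "beta in [alpha - delta, alpha + delta]" when the estimated
    probability P_alpha exceeds the reference value P_delta."""
    p_delta = da_probability(m, g, delta)
    p_alpha_hat = rng.binomial(n, da_probability(m, g, beta - alpha)) / n
    return p_alpha_hat >= p_delta

g_g = lambda n: 2 ** n
rng = np.random.default_rng(1)
inside = decide_interval(beta=1.000, alpha=1.005, delta=0.01,
                         m=5, g=g_g, n=1000, rng=rng)
outside = decide_interval(beta=1.500, alpha=1.005, delta=0.01,
                          m=5, g=g_g, n=1000, rng=rng)
```

With \(\beta \) well inside the interval the test accepts with high probability, and it clearly rejects when \(\beta \) is far outside.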

Fig. 8
figure 8

Probability of error of the decision on an interval algorithm for the combinatorial, linear and geometric QADS for a fixed \(m=5\), sample size of 1000, and \(\delta = 0.01\)

Fig. 9
figure 9

Probability of error of the decision on an interval algorithm for the combinatorial, linear and geometric QADS for a fixed \(G=31\), sample size of 1000 and \(\delta = 0.01\)

4.3 Interval correction (IC)

Since the probability of error of the previous algorithm is 0.5 when \(\beta \) lies on one of the endpoints of the interval, an improvement can be made. The improved version, called IC, will be the basis of the \(\delta \)-approximation algorithm. When \(P_{\alpha } \approx P_{\delta }\), we test which endpoint \(\beta \) is closer to. Suppose, for instance, that \(\beta \) is close to the endpoint \(\alpha + \delta \). Then, it is assumed that \([\alpha , \alpha + 2\delta ]\) (an interval of the same length) contains \(\beta \).

The details on the implementation and the computation of the error probabilities can be found in Appendix 2. Figure 10 shows the behaviour of the different functional QADS, with the geometric QADS giving the best performance. On the other hand, the differences between the two versions of the algorithm can be seen in Figs. 11, 12 and 13. In all cases, \(\delta \) has been taken equal to 0.01, the sample size is 1000, the fixed number of operations is 31, and the reference probabilities have been optimally chosen. As we can observe, the improvement is remarkable overall. However, the chance of obtaining \(P_\alpha \approx P_\delta \), but deciding the incorrect endpoint afterwards, might make the IC less accurate for low values of \(|\beta - \alpha |\).

Fig. 10
figure 10

Probability of error of the IC for the combinatorial, linear and geometric QADS for a fixed \(G=31\), sample size of 1000, \(\delta = 0.01\), and optimal \(d_1, d_2\)

Fig. 11
figure 11

Comparison of the probability of error of the decision on an interval and IC algorithms for the combinatorial QADS of size \(G=31\), sample size of 1000, \(\delta = 0.01\) and optimal \(d_1, d_2\)

Fig. 12
figure 12

Comparison of the probability of error of the decision on an interval and IC algorithms for the linear QADS of size \(G=31\), sample size of 1000, \(\delta = 0.01\), and optimal \(d_1, d_2\)

Fig. 13

Comparison of the probability of error of the decision on an interval and IC algorithms for the geometric QADS of size \(G=31\), sample size of 1000, \(\delta = 0.01\), and optimal \(d_1, d_2\)

4.4 \(\delta \)-Approximation algorithm

The final algorithm provides an approximation of the angle \(\beta \), within a given error \(\delta \), and it works for any \(\beta \in [0, 2\pi ]\) (an advantage over some other versions based on combinatorial QADS). The algorithm is as follows: an initial choice \(\delta _0 \gg \delta \) is taken, for instance, \(\delta _0=10^2\delta \). For different approximations \(\alpha _0\in [0,2\pi ]\) of the angle \(\beta \), the IC is run until the decision \(\beta \in [\alpha _0 - \delta _0, \alpha _0 + \delta _0]\) is taken for a certain \(\alpha _0\). In the next step of the algorithm, we set \(\delta _{1}=\frac{\delta _{0}}{10}\) and, starting from \(\alpha _{1} = \alpha _0\), run the IC until it decides that \(\beta \in [\alpha _{1} - \delta _{1}, \alpha _{1} + \delta _{1}]\) for a certain \(\alpha _1\). In the last step of the algorithm, \(\delta _{2} = \frac{\delta _{1}}{10} = \delta \) and \(\alpha _{2}\) is chosen analogously until the IC decides that \(\beta \in [\alpha _{2} - \delta _{2}, \alpha _{2} + \delta _{2}]\) for a certain \(\alpha _2\).
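The three-stage refinement can be sketched as follows, where `ic_decide(alpha, d)` is a hypothetical stand-in for running the IC until it decides that \(\beta \) lies in \([\alpha ' - d, \alpha ' + d]\) for some \(\alpha '\), which it returns:

```python
def delta_approximation(ic_decide, alpha0, delta):
    """Sketch of the delta-approximation loop: the interval half-width is
    shrunk by a factor of 10 per stage, from delta_0 = 100*delta down to
    delta_2 = delta, re-running the IC (here abstracted away) each time."""
    alpha = alpha0
    for d in (100 * delta, 10 * delta, delta):  # delta_0, delta_1, delta_2
        alpha = ic_decide(alpha, d)
    return alpha  # beta is decided to lie in [alpha - delta, alpha + delta]
```

This is only the control flow; the statistical work, and the cost of the method, lives entirely inside the IC calls.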

An important detail is that this algorithm starts by approximating \(\beta \) with low precision using the Hadamard test (discussed in Sect. 5.1) before checking intervals with the IC. A more detailed account of this algorithm is given in 2. However, any other phase estimation method can be applied for that first approximation. This means that the \(\delta \)-approximation algorithm may offer the possibility of improving the estimation of any phase estimation method without investing a lot of qubits or measurements, as we will see in the next section.

We will stick to the Hadamard test in this article due to its direct relation to functional QADS, but more variations will be explored in the future.

With respect to the asymptotic behaviour, the algorithm was run for values of \(\delta \) between \(1/2^6\) and \(1/2^{14}\) in order to observe how its maximum number of ancilla qubits, size and depth evolve for \(p = 1/\delta \). The maximum m used for \(p = 2^k\) was \(k+2\), suggesting \(O(\log _2 4p)\) behaviour. The evolution of the size and depth can be observed in Fig. 14: the size evolves similarly to \(O(p \log ^2 (\log p))\), whereas the depth (assuming that powers of U are efficiently implementable) behaves similarly to \(O(\log ^3 p)\). Particularly for the size, a clear stepped behaviour is observed, where the changes of step coincide with the need for a new ancilla qubit.

Fig. 14

Asymptotic behaviour of the size and depth of the \(\delta \)-approximation method. The constants were selected so that the three functions coincide at the first dot (\(p = 2^6\)). We also assume all repetitions of QADS to be done sequentially

However, there are some parameters in this algorithm that are yet to be optimized, so this asymptotic behaviour is just illustrative. Its theoretical behaviour will be explored in future works.

5 Other relevant considerations about the \(\delta \)-approximation algorithm

5.1 Comparison with other functional QADS-based methods

A natural starting point for comparing the \(\delta \)-approximation algorithm is against other phase estimation methods, previously studied for combinatorial QADS [11], which provide confidence intervals. We will introduce two of them first.

Confidence interval for the m-Hadamard test The m-Hadamard test was introduced in [11]. With the same notation as above, its aim is an approximation of \(\beta \). Next, we compute a confidence interval for the angle \(\beta \) based on combinatorial QADS. In this test, we make n independent repetitions of a combinatorial QADS experiment for the matrix U, and approximate \(\beta \) using the fact that the probability of obtaining \(|\varphi _0\rangle \) at the end of the experiment is

$$\begin{aligned} p = \cos ^{2m} \frac{\beta }{2} = \left( \frac{1 + \cos \beta }{2} \right) ^m. \end{aligned}$$

Each experiment follows a Bernoulli distribution, so we can apply the known expression for a confidence interval for n repetitions of a Bernoulli experiment, with a preset level of confidence a and an unknown standard deviation. Here, \(t_{a/2}\) denotes the number such that \(P(T < -t_{a/2}) = a/2\), where T follows a Student’s t-distribution with \(n-1\) degrees of freedom (see [13], chapter 11.4).

$$\begin{aligned} a&= P\left( \overline{X} - t_{a/2} \frac{\hat{S}}{\sqrt{n}}< p< \overline{X} + t_{a/2} \frac{\hat{S}}{\sqrt{n}} \right) = P\left( \overline{X} - t_{a/2} \frac{\hat{S}}{\sqrt{n}}< \left( \frac{1 + \cos \beta }{2} \right) ^m < \overline{X} + t_{a/2} \frac{\hat{S}}{\sqrt{n}} \right) \\ &= P\left( 2 \sqrt[m]{\overline{X} - t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1< \cos \beta< 2 \sqrt[m]{\overline{X} + t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1 \right) \\ &= P\left( \arccos \left( 2 \sqrt[m]{\overline{X} + t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1 \right)< \beta < \arccos \left( 2 \sqrt[m]{\overline{X} - t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1 \right) \right) \end{aligned}$$

Therefore, the confidence interval for \(\beta \) with a level of confidence a is

$$\begin{aligned} \left[ \arccos \left( 2 \sqrt[m]{\overline{X} + t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1 \right) , \arccos \left( 2 \sqrt[m]{\overline{X} - t_{a/2} \frac{\hat{S}}{\sqrt{n}}} - 1 \right) \right] \end{aligned}$$
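A minimal Monte-Carlo sketch of this interval (assuming n is large enough that the Student t quantile is well approximated by the normal one, and clamping against sampling noise before taking the arccosine):

```python
import math
import random
import statistics

def hadamard_ci(beta, m=1, n=1500, a=0.05, seed=0):
    """Simulate n Bernoulli trials with success probability
    p = ((1 + cos beta)/2)^m and map the resulting confidence interval
    for p back to a confidence interval for beta."""
    rng = random.Random(seed)
    p = ((1 + math.cos(beta)) / 2) ** m
    xs = [1.0 if rng.random() < p else 0.0 for _ in range(n)]
    xbar = sum(xs) / n
    s = math.sqrt(sum((x - xbar) ** 2 for x in xs) / (n - 1))  # sample std
    t = statistics.NormalDist().inv_cdf(1 - a / 2)  # ~ t_{a/2} for large n
    half = t * s / math.sqrt(n)
    p_lo = max(0.0, xbar - half)   # clamp the interval for p to [0, 1]
    p_hi = min(1.0, xbar + half)
    # m-th root and arccos, as in the displayed interval; clamp to [-1, 1]
    lo = math.acos(min(1.0, max(-1.0, 2 * p_hi ** (1 / m) - 1)))
    hi = math.acos(min(1.0, max(-1.0, 2 * p_lo ** (1 / m) - 1)))
    return lo, hi
```

Note that the upper endpoint of the interval for p yields the lower endpoint for \(\beta \), since \(\arccos \) is decreasing.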

The study on combinatorial QADS showed that the original Hadamard test, with \(m=1\), was the most balanced option when \(\beta \) is unknown. An example of this experiment for \(n = 1500\) repetitions of the QADS, \(a = 0.05\), \(m = 1\) and random values of \(\beta \) can be seen in Fig. 15, along with the average length of the intervals. Observe that this confidence interval methodology requires that \(\beta \in [0, \pi ]\), although it can be adapted for greater values of \(\beta \) with the approach addressed in Sect. 4.4.

Fig. 15

100 confidence intervals from the Hadamard test (\(m=1\)), for random \(\beta \) and a sample of size 1500

Dichotomy search The dichotomy search assumes that \(\beta \) lies in the interval \([0, \pi ]\), which is split into two halves. The values of DA\((m, g, \beta - \alpha )\), for \(\alpha = 0\) and \(\alpha = \pi \), are approximated by running the DA multiple times at each endpoint and taking the proportion of positive outcomes. A decision is then taken (\(\beta \in [0,\frac{\pi }{2}]\) or \(\beta \in [\frac{\pi }{2},\pi ]\)), and the process is repeated as many times as desired. The performance of the method benefits from larger differences between the probabilities at the endpoints. So, in order for the geometric QADS to work, we require DA\((m, g_g, t)\) to be decreasing in the current interval. Therefore, for the n-th step of the algorithm, when the interval length is \(\pi /2^n\), we should pick \(m_g\) such that

$$\begin{aligned} \frac{2\pi }{2^{m_g}} \ge \frac{\pi }{2^n} \Leftrightarrow 2^{n + 1} \ge 2^{m_g} \Leftrightarrow m_g \le n +1. \end{aligned}$$
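The bisection loop itself can be sketched as follows, where `positive_rate(alpha)` is a hypothetical stand-in for the fraction of positive DA outcomes at angle \(\alpha \), assumed larger the closer \(\alpha \) is to \(\beta \) (which is what the monotonicity condition on DA\((m, g_g, t)\) is meant to ensure):

```python
import math

def dichotomy_search(positive_rate, steps=4):
    """Sketch of the dichotomy search on [0, pi]: at each step, keep the
    half whose endpoint scored the higher estimated positive rate."""
    lo, hi = 0.0, math.pi
    for _ in range(steps):
        mid = (lo + hi) / 2
        if positive_rate(lo) >= positive_rate(hi):
            hi = mid  # beta looks closer to the lower endpoint
        else:
            lo = mid
    return lo, hi  # interval of length pi / 2**steps
```

In practice each `positive_rate` evaluation costs many DA runs, which is why larger endpoint probability gaps help.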

With this choice, the performances of the three types of functional QADS under study, for the fourth step of the algorithm, and for fixed \(m = 5\) and \(G = 31\), are plotted in Figs. 16 and 17, respectively. As before, it can be checked that geometric QADS are the family with the best performance.

Fig. 16

Difference in the probability of a positive outcome at the two endpoints of the interval in the fourth step of the dichotomy search for the combinatorial, linear and geometric QADS and a fixed \(m = 5\)

Fig. 17

Difference in the probability of a positive outcome at the two endpoints of the interval in the fourth step of the dichotomy search for the combinatorial, linear and geometric QADS and a fixed \(G = 31\)

Comparing the three functional QADS-based methods In Fig. 18, we can see the comparison among the dichotomy search, the Hadamard test and the \(\delta \)-approximation algorithm, where the latter clearly outperforms the other two for a choice of \(\delta = 0.01\) and approximately 18,000 uses of the matrix U. For the dichotomy search, we considered the number of iterations needed to end up with an interval of length \(\approx 2\delta \); for the Hadamard test, we fixed \(a = 0.05\).

Fig. 18

Comparison of 1000 confidence intervals of the dichotomy search, Hadamard test and \(\delta \)-approximation algorithm, respectively

5.2 Comparison with QPE method

Since phase estimation is the main problem addressed, it is reasonable to at least take a first glance at a comparison between the \(\delta \)-approximation algorithm and other well-known phase estimation algorithms, mainly the QPE, whose circuit is shown in Fig. 19.

Fig. 19

QPE circuit

We can observe that the circuit is exactly like that of geometric QADS, with the exception of the final inverse QFT gate. We already saw in Fig. 4 how the \(UROT_k\) gates are applied in the QFT. The QPE circuit with t qubits applies U a total of \(2^t - 1\) times and the \(UROT_k\) gates, \(k = 1,\ldots , t\), a total of \(t(t-1)/2\) times. That makes a total of \(2^t - 1 + t(t-1)/2\) controlled gates.
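This count is straightforward to encode (a one-line helper, introduced here only for the comparison below):

```python
def qpe_controlled_gates(t):
    """Total controlled gates in a t-qubit QPE circuit: 2^t - 1 controlled
    powers of U plus t(t-1)/2 controlled rotations in the inverse QFT."""
    return 2 ** t - 1 + t * (t - 1) // 2
```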

Despite applying controlled gates (U gates plus UROT gates) \(O(t^2)\) more times, we know that QPE estimates the phase in a single run, whereas the \(\delta \)-approximation algorithm needs many repetitions of geometric QADS. On the other hand, the accuracy of the \(\delta \)-approximation algorithm is higher for the same number of qubits. Thus, a fairer comparison would take into account the number of controlled gates of both algorithms for a fixed accuracy. Sometimes, the exponential powers of U are efficiently implementable, in the sense that there is no need to apply U exponentially many times, so this comparison addresses the rest of the situations.

The number of qubits needed in the QPE to obtain a certain accuracy is known [12, page 224]. In order to approximate \(\beta \) to an accuracy of \(2^{-n}\) with a probability of success of at least \(1 - \epsilon \), we should take

$$\begin{aligned} t = n + \Bigg \lceil \log _2 \left( 2 + \frac{1}{2\epsilon } \right) \Bigg \rceil \end{aligned}$$

Therefore, let us fix, for example, \(n = 7\) and run the \(\delta \)-approximation algorithm 10,000 times for \(\delta = 2^{-n} = 1/128\) to obtain a numeric approximation of its probability of error, which represents \(\epsilon \). The result is shown in Fig. 20. The error obtained is \(\epsilon = 19/10000\), so

$$\begin{aligned} t = 7 + \Bigg \lceil \log _2 \left( 2 + \frac{5000}{19} \right) \Bigg \rceil = 7 + \lceil 8.0507 \rceil = 16. \end{aligned}$$
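The qubit-count formula above can be checked directly (helper name chosen here for illustration):

```python
import math

def qpe_qubits(n, eps):
    """Ancilla qubits needed by QPE for accuracy 2^-n with success
    probability at least 1 - eps [12, p. 224]."""
    return n + math.ceil(math.log2(2 + 1 / (2 * eps)))
```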
Fig. 20

10,000 confidence intervals of the \(\delta \)-approximation algorithm for \(\delta = 1/128\)

Without the rounding, the result for t is usually around 15, so we will stick to that more generous number. Then the QPE circuit applies controlled gates \(2^{15} - 1 + 14 \times 15 / 2 = 32872\) times, significantly more than the average of 20316.98 for the \(\delta \)-approximation algorithm. In addition, the latter uses half the qubits (four for the first iteration, eight for the second). Therefore, the advantage is clear when there are no efficient implementations of exponential powers of U.

Even in the case that they were efficiently implementable, the depth of a circuit is one of the greatest concerns when avoiding errors: the longer the circuit, the more error we will get. The possibility of beating the accuracy of QPE on NISQ (noisy intermediate-scale quantum) devices, especially when using many qubits, with a simpler and shorter algorithm was pointed out, for example, in [14,15,16,17,18]. In that sense, the idea of estimating the phase through a short circuit, even if run multiple times, can be preferable in certain situations in order to avoid these imprecisions. Also, the reduction in qubits alleviates this further.

It is true that, by using the semiclassical Fourier transform (Fig. 21), the preparation of qubits can be delayed and the measurements applied earlier, using just two qubits simultaneously and reducing the time they are involved. However, besides the fact that these actions do not decrease the number of operations and can be taken in the \(\delta \)-approximation algorithm too, there is no way of reducing the involvement of the qubits associated with \(|\varphi _0\rangle \) and U, so the circuit length would have a negative effect anyway.

Finding a theoretical distribution for this algorithm, in order to compare it more generally and independently from numeric approximations, deserves a deeper study.

Fig. 21

The QPE circuit with a semiclassical QFT\(^\dagger \). Figure taken from [19]

5.3 Comparison with other methods

Optimal states. An alternative approach to the QPE is introduced in [19], where the optimal states and measurement basis for phase estimation are presented, along with an error distribution. For any \(N \in \mathbb {N}\), the optimal state is given by the formula:

$$\begin{aligned} |\psi _{opt}\rangle = \sqrt{\frac{2}{N+2}} \sum _{n=0}^N \sin \left( \frac{\pi (n+1)}{N+2} \right) |n\rangle . \end{aligned}$$
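The amplitudes of this state are easy to tabulate, and one can verify numerically that they are correctly normalized (a small sketch; the function name is ours):

```python
import math

def optimal_state(N):
    """Amplitudes of the optimal phase-estimation state |psi_opt> of [19]:
    sqrt(2/(N+2)) * sin(pi (n+1)/(N+2)) for n = 0, ..., N."""
    c = math.sqrt(2 / (N + 2))
    return [c * math.sin(math.pi * (n + 1) / (N + 2)) for n in range(N + 1)]
```

The normalization follows from \(\sum _{n=1}^{M-1} \sin ^2(\pi n/M) = M/2\) with \(M = N+2\).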

The optimal measurement POVM (positive operator-valued measure) has elements \(\frac{N+1}{2\pi } |\hat{\phi }\rangle \langle \hat{\phi }| \, d\hat{\phi }\), where

$$\begin{aligned} |\hat{\phi }\rangle = \frac{1}{\sqrt{N+1}} \sum _{n=0}^N e^{in\hat{\phi }} |n\rangle , \end{aligned}$$

and allows us to obtain an error distribution of

$$\begin{aligned} \frac{1}{\pi (N+2)} \left( \frac{\cos ((\hat{\phi } - \phi )(1 + N/2)) \sin (\pi /(2+N))}{\cos (\pi /(2+N)) - \cos (\hat{\phi } - \phi )} \right) ^2. \end{aligned}$$

In this situation, the system would need at least \(N = 1675\) to achieve an error below 1/128 with failure probability 19/10000. This would mean that \(m > 10\), compared to the \(m = 8\) of the \(\delta \)-approximation algorithm. Besides, the measurements have to be approximated with adaptive measurements and, as pointed out before, the depth of the circuit will cause increasing noise and, therefore, a higher chance of ruining the results.

Iterative phase estimation [20] Another interesting way of estimating a phase is iterative phase estimation, where the number of measurements needed is decreased substantially. The author presents the results in Table 2, depending on the number of iterations, l, and the number of measurements in each of them, \(N_{tot}\).

Table 2 Number of trials out of 100,000 with \(|\beta - \alpha | \le 1/(2^l \times 3)\). Table taken from [20]

By inputting \(\delta = 1/(2^l \times 3)\) into the \(\delta \)-approximation algorithm, we found that, for the four different values of l considered in the table, it succeeds in between 99,810 and 99,880 of the 100,000 trials, which clearly indicates that at least 30 measurements per iteration would be needed for the iterative method to beat it.

Sticking to the case \(l=7\), that would mean 210 measurements, while our algorithm performs around 6155. Even though the difference is huge, 6016 of those are invested in the initial Hadamard test to approximate \(\beta \), and only 139 in the rest. Still, those 139 measurements reduce the error from 1/10 to \(1/(2^7 \times 3)\). This emphasizes the fact that the second part of the \(\delta \)-approximation algorithm shows great potential to improve a previous estimation obtained by any method.

Kitaev’s method [21] and Faster Phase Estimation [22] Finally, we compare the \(\delta \)-approximation method asymptotically with Kitaev’s method and the Faster Phase Estimation method. In [22, Table I] we can check the asymptotic evolution in width, depth and size of these two methods depending on m, the number of powers of U used, assuming these powers are efficiently implementable and using a sequential circuit.

In the case of the \(\delta \)-approximation method, the width is clearly O(m). We already saw that the depth seems similar to \(O(\log ^3 p)\), where p is the accuracy parameter, which grows exponentially as m increases. That would mean an \(O(m^3)\) depth. The size is equivalent to the depth when U is efficiently implementable.

The comparison between the three methods can be checked in Table 3. The \(\delta \)-approximation algorithm offers an advantage in width, and even though it seems to lose the race in terms of depth and size, we should keep in mind that not only is its theoretical asymptotic behaviour yet unknown, but its efficiency depends on some parameters that are yet to be optimized, so the approximation in Fig. 14 is based on a particular case.

Table 3 Asymptotic behaviour of Kitaev’s method, faster phase estimation and the \(\delta \)-approximation method depending on m, the number of powers of U used

5.4 Inexact states

Another reasonable concern is the algorithm’s behaviour when not every component is exact. How an error in the preparation of the eigenvector evolves through the DA circuit is summarized in the following result.

Theorem 6

In the DA for geometric QADS, suppose \(|\varphi _0\rangle \) is the exact eigenvector of U, with eigenvalue \(e^{i\beta }\), that we are interested in, but the initial inexact state \(|\psi \rangle \) satisfies \(|\langle \varphi _0 | \psi \rangle |^2 = 1-\delta \). If \(\widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha )\) is the new probability of error of the DA initialized in \(|\psi \rangle \), then:

  • If \(\hbox {DA}(m, g_g, \beta - \alpha ) \le \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha )\), then

    $$\begin{aligned} \left| \hbox {DA}(m, g_g, \beta - \alpha ) - \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha ) \right| < \delta ^2. \end{aligned}$$
    (2)
  • If \(\hbox {DA}(m, g_g, \beta - \alpha ) > \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha )\), then

    $$\begin{aligned} \left| \hbox {DA}(m, g_g, \beta - \alpha ) - \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha ) \right| < 4(\delta - \delta ^2). \end{aligned}$$
    (3)

In 3, the theorem is proven and a further study of bound (3) is done, which shows that a more realistic bound would be \(3.428\delta - 3.093\delta ^2\). However, from Lemma 2 (in 3) we can see that, if \(\lambda _k\) are the eigenvalues of U, with \(\lambda _0 = \beta \), then \(\forall k>0,\)

  • if \(\lambda _k = \beta \), then \(\left| \hbox {DA}(m, g_g, \beta - \alpha ) - \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha ) \right| = 0\);

  • if \(\lambda _k = \beta \pm \pi /(2^m - 1)\), then \(\left| \hbox {DA}(m, g_g, \beta - \alpha ) - \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha ) \right| \) might rise to \(3.428\delta - 3.093\delta ^2\);

  • if \(\lambda _k = \beta \pm \pi /2^{m-1}\), then \(\left| \hbox {DA}(m, g_g, \beta - \alpha ) - \widetilde{\hbox {DA}}(m, g_g, |\psi \rangle , U, \alpha ) \right| \le 2\delta - \delta ^2\).

As we can see, the greatest error is caused when all eigenvalues fall under the second case and, if we are aware of this situation, it can be avoided completely by just using one less qubit. Also, we are working with the difference between two functions, so even in that worst-case scenario the bound would be approached only in a very small range of values of \(\alpha \), and usually for \(\alpha \approx \beta \), when the algorithms have almost perfect accuracy, so the error would have a less worrying effect. Finally, since this is an error on DA\((m, g_g, \beta - \alpha )\), which is a probability function, a different probability does not imply a different outcome; there is still a chance of getting the correct result anyway, in which case the error would have no consequences.

Nevertheless, it is obvious that a deeper study on this propagation for the IC and beyond is necessary and will be addressed in future works, as well as considering other types of errors.

6 Conclusions and future work

In this work, we deepen the study of the QADS framework introduced in [2] and extended in [11]. QADS, beyond their original detection purpose, have also proven useful in some practical applications related to the phase estimation of eigenvalues of a unitary matrix.

We have introduced the new family of functional QADS and studied basic properties, such as the amplitude of the initial state at the end of the circuit, or conditions for them to have a \(\delta \)-detecting time. In addition, the class of geometric QADS has been shown to be especially suitable for applications and for the explanation of the Quantum Fourier Transform. For instance, geometric QADS are optimal for the decision algorithm on an eigenvalue of a unitary matrix for a fixed number of qubits and a fixed number \(2^n - 1\) of operations.

In future works, we want to explore more applications of functional QADS and study variations of those introduced here. For instance, replacing the initial or final Hadamard gates with other rotations, such as the QFT, would lead to other known algorithms like Quantum Phase Estimation.

Also, the \(\delta \)-approximation algorithm deserves a more thorough study in future works. There are still some possible improvements to be made, as well as the optimization of certain variables that affect its accuracy and size, such as the factor separating every \(\delta _i\) from \(\delta _{i+1}\) or the number of operations invested in finding \(\alpha _0\). Using the algorithm as a way of improving the accuracy of any given phase estimation method is also worth studying. Although it shows competitive results when only short circuits can be executed, a theoretical distribution for this algorithm would be really helpful for comparisons with other phase estimation methods and for studying the propagation of errors in the circuit; this will be explored in future works.