Introduction
Excitation transfer in the single-excitation subspace of a ring of spin-1/2 particles coupled via XXZ couplings forms a simple model for information transfer in a spintronic router (Langbein et al., 2015; Schirmer et al., 2018). Design of controls for such systems is nontrivial. Most fundamentally, measurement of the quantum state in the usual feedback control paradigm would alter the dynamics of the quantum system in a probabilistic manner (Wiseman and Milburn, 2009). Additionally, the coherent dynamics of a quantum system result in trajectories that evolve unitarily with all eigenvalues on the imaginary axis and are thus not asymptotically stable (Weidner et al., 2022). Taken together, this precludes the application of common linear control techniques such as pole placement and Linear-Quadratic-Gaussian (LQG) design.
To obviate such roadblocks to development of classically inspired controls, the work this analysis is based on appeals to the solution of a nonconvex optimization problem to generate optimal, time-independent controllers (Langbein et al., 2015). The controllers considered are designed to alter the energy landscape of a quantum ring via static bias fields to facilitate the transfer of a single excitation from an initial spin |IN⟩ to a target spin |OUT⟩ with maximum fidelity at a given time T or over a (readout) time window T ± Δ/2 under unitary dynamics.
While design of realizable controllers is a challenge, ensuring these controllers’ robustness to external perturbations or parameter uncertainty is necessary to fully harness any benefits of emerging quantum technology (Glaser et al., 2015; Shermer, 2023). To progress from the current Noisy Intermediate-Scale Quantum (NISQ) era and turn theoretical promises into reproducible experimental realities, the need for robustness of quantum control systems emerges with accrued urgency. This is reminiscent of the situation in classical control starting nearly half a century ago, when super-maneuverable aircraft became a reality, and fly-by-wire control systems took over pilots’ inputs to counter the uncertainty in the airframe model at the edge of the flight envelope – a concept that became known as robustness and has ultimately led to the development of classical robust control. In the quantum arena, various control designs for specific applications claiming robustness have been proposed (Daems et al., 2013; Deng et al., 2017; Shapira et al., 2018; Wu et al., 2019; Güngördü and Kestner, 2019; Dridi et al., 2020; Koswara et al., 2021; Kosut et al., 2022; Ram et al., 2022; Zhang et al., 2022; Valahu et al., 2022), but a comprehensive framework for robust control for quantum systems is lacking.
Among the ad hoc techniques that have been developed, some have challenged physical limitations such as the Heisenberg limit, but quantum robustness has not yet matured into a theory of control limitations – parallel to the very successful robustness theory developed in the 1980s for classical control systems (Safonov et al., 1981), which led to the formulation of quantifiable limitations on achievable performance in terms of accuracy versus sensitivity of the accuracy to uncertainties. Unfortunately, the unique characteristics of quantum systems present challenges in analyzing the robustness of control schemes in the context of classical robust control. The marginal stability of open quantum systems precludes the use of common small gain theorem-based techniques such as structured singular value analysis (Zhou and Doyle, 1998) in most cases. Also, in contrast to classical control problems based on asymptotic response, excitation transfer is an inherently time-domain problem requiring a time-domain view of robustness that differs from classical frequency-domain methods (Sontag, 1998; O’Neil et al., 2022).
In this analysis paper, we explore the design of time-optimal controllers published as Langbein et al. (2022) and analyze their robustness through a time-domain logarithmic sensitivity measure. The correlation between error and log-sensitivity of the controllers in this data set was first explored in Jonckheere et al. (2018), which identified nonconventional trends for controllers optimized for time-windowed readout. In this paper, we expand the analysis to include controllers optimized for instantaneous readout to better understand the robustness of the entire range of possible controllers, leading to the identification of factors that yield greater robustness. The analysis shows that controllers optimized for exact-time excitation transfer exhibit the trade-off between robustness and performance expected of a classical feedback control system. In contrast, controllers optimized to maximize transfer over a time window display trends between performance and robustness that contradict expectations from classical control. Furthermore, in this analysis, we apply a modified log-sensitivity calculation that accounts for averaging over the readout window, a factor not accounted for in previous work.
The remainder of this paper is organized in the following manner. In Section 2.1, we present the mathematical model for a spin-1/2 ring, derive the evolution for excitation transfer and define the performance measure of fidelity. In Section 2.2, we present the optimization scheme for maximizing the fidelity, and in Section 2.3, we define the time-domain log-sensitivity used to gauge the robustness of the controllers. In Section 3, we present the hypothesis testing used to judge the conventional versus nonconventional relationship between performance (measured as the fidelity) and robustness (measured as the size of the log-sensitivity). We then present the results of the hypothesis test and identify additional robustness features not highlighted by the statistical analysis. We conclude in Section 4.
Methods
System description, dynamics and fidelity
Consider a set of N interacting spin-1/2 particles with only one spin in an excited state and the remainder in the ground state. In this single-excitation subspace, the network can be represented by an N × N total Hamiltonian H 0 with
Here, J mn are the couplings between spins m and n, measured in units of frequency, and ℏ is the reduced Planck constant. In general J mn = J nm , and for a ring topology with nearest-neighbor coupling, J mn is only nonzero for n = m ± 1 and J 1N = J N1. In particular, we consider the case of uniform coupling, where all nonzero couplings have the same value J. The terms X n , Y n , Z n are the Pauli spin operators acting on spin n. These are N-fold tensor products whose nth factor is one of the Pauli matrices
and all other factors are the 2 × 2 identity matrix I. The parameter κ distinguishes different coupling types such as XX-coupling (κ = 0) or Heisenberg coupling (κ = 1); specifically, we consider XX-coupling. We justify this restriction to XX-coupling based on the control scheme introduced in Section 2.2. In short, this scheme is based on spin-addressable bias fields modeled as diagonal elements of the Hamiltonian. As Heisenberg coupling introduces purely diagonal coupling terms into the Hamiltonian, they can be absorbed into the diagonal control elements so that the system model is equivalent to a strictly XX-coupled system.
We represent the state of the system by a wavevector |ψ⟩ ∈ $$\mathbb{C}$$ N whose nth entry represents the state of spin n. We only consider normalized wavevectors such that ⟨ψ|ψ⟩ = 1. Specifically, if spin n is measured to be in the excited state with absolute certainty, the nth entry of |ψ⟩ has magnitude 1. Conversely, if the spin has zero probability of being excited the entry is 0, indicating the spin is in the ground state. A value of 0 < |ψ n | < 1 indicates the nth spin has a non-zero probability of being excited. If the state |ψ⟩ differs from the state |ψ 0⟩ only by a phase factor e iφ then |⟨ψ 0|ψ⟩|2 = 1. Associating the N state vectors {|ψ n ⟩} which indicate a single excitation on spin n with the natural basis vectors of $$\mathbb{C}$$ N provides a convenient basis for describing the system dynamics.
In this basis, considering only XX-coupling and ring topology, the Hamiltonian of (1) takes the explicit form
The dynamical evolution of this system is governed by the time-dependent Schrödinger equation:
Assuming a system of units where ℏ = 1, the solution to (3) is
Noting that H 0 is Hermitian with real eigenvalues, we can immediately see that the eigenvalues of the open-loop system are purely imaginary, and so the system is not stable, but only marginally stable (Chen, 2013). In simplest terms, this means there is no asymptotic steady state of the system, as evident from the eigenvalues of the form $\{-i\lambda _{n}\}_{n=1}^{N}$. This presents two conflicting issues in the control of closed quantum systems. On the one hand, unitary evolution of the system is desirable in retaining the coherence or phase of the system, which is a key feature that gives quantum technology an advantage over classical technologies. On the other hand, the techniques of classical control theory (pole placement, LQG, etc.) require synthesis of stabilizing controllers (Dorf and Bishop, 2000). While this is prudent from a classical point of view in that stabilizing controllers preclude the possibility of an unbounded response, applied to a quantum system, this would result in convergence to a classical steady state, resulting in the loss of coherence. This provides a strong motivation for the development of control techniques outside the scope of established classical feedback control.
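As a concrete illustration, the single-excitation Hamiltonian (2) and its propagator can be constructed numerically. The following sketch is in Python rather than the MATLAB used for the original study, and the ring size, coupling strength, and time are illustrative assumptions; it verifies the unitarity and marginal stability discussed above:

```python
import numpy as np

# Illustrative XX ring in the single-excitation subspace: uniform
# nearest-neighbour coupling J plus the ring-closing term J_1N = J_N1.
N, J, t = 5, 1.0, 2.0
H0 = J * (np.eye(N, k=1) + np.eye(N, k=-1))
H0[0, -1] = H0[-1, 0] = J

# U(t) = exp(-i t H0) via the spectral decomposition (hbar = 1).
lam, V = np.linalg.eigh(H0)            # lam is real since H0 is Hermitian
U = (V * np.exp(-1j * t * lam)) @ V.conj().T

assert np.allclose(U.conj().T @ U, np.eye(N))          # unitary: no decay
assert np.allclose(np.abs(np.linalg.eigvals(U)), 1.0)  # marginal stability
```

Because every eigenvalue of U(t) sits on the unit circle, no choice of t damps the state: the system has no asymptotic steady state, consistent with the discussion above.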
We now consider the problem of transferring the single excitation of the system from a given input spin |IN⟩=|ψ 0⟩ to a specific output spin |OUT⟩. At a given time T the probability that |ψ(T)⟩=|OUT⟩ modulo the global phase e iφ is given by the squared overlap of the current state with the target state or
where $$\cal{F}$$ (T) is the fidelity of the transfer at time T. Extending this concept to a time-window of ± Δ/2 about the time T, we define the time-averaged fidelity as
Finally, noting that the upper bound on both $$\cal{F}$$ (T) and $$\cal{F}$$ (T±Δ/2) is unity, we define the fidelity error in analogy to the tracking error as
Design goals and optimization scheme
Consider the design goal of maximizing the fidelity for the instant time case (5) or the time-averaged case (6). To obviate the issues of backaction involved in measurement-based feedback control, we introduce control via static bias fields that ideally address a single spin to alter the energy landscape of the system. In terms of the Hamiltonian (2), these control fields take the form
Here, the D n ∈ $$\mathbb{R}$$ N × N consist of all zeros, save for the nth diagonal element which assumes the scalar value d n of the field addressing spin n. This augments the natural Hamiltonian so that H D = H 0 + D. The state transition matrix is thus modified as U D (t) = e −it(H 0 +D), and the expressions for the fidelity in (5) and (6) are similarly modified.
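A minimal numerical sketch of the controlled fidelity may help fix ideas. The ring size, bias values, target spin, and readout window below are illustrative assumptions, not values from the paper; the sketch evaluates the instantaneous fidelity (5) and the time-averaged fidelity (6) for H D = H 0 + D:

```python
import numpy as np

def propagator(H, t):
    # U_D(t) = exp(-i t (H0 + D)) via spectral decomposition (hbar = 1)
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * lam)) @ V.conj().T

N, J = 5, 1.0
H0 = J * (np.eye(N, k=1) + np.eye(N, k=-1))
H0[0, -1] = H0[-1, 0] = J
D = np.diag([0.8, 0.0, 0.8, 0.0, 0.0])       # illustrative bias fields d_n
inp, out = np.eye(N)[0], np.eye(N)[2]        # |IN> = |1>, |OUT> = |3>

def fidelity(T):
    return abs(out @ propagator(H0 + D, T) @ inp) ** 2

def windowed_fidelity(T, delta, samples=201):
    # time-averaged fidelity over the readout window T +/- delta/2
    ts = np.linspace(T - delta / 2, T + delta / 2, samples)
    return np.mean([fidelity(t) for t in ts])

T = 3.0
e_T = 1 - fidelity(T)                 # fidelity error e(T)
e_w = 1 - windowed_fidelity(T, 0.5)   # windowed error e_Delta(T)
```

As Δ → 0 the windowed average reduces to the instantaneous fidelity, so (6) contains (5) as a limiting case.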
Maximization of the fidelity at a specific time T or over a window T ± Δ/2 then becomes a non-convex optimization problem of the form
or
Here, $\mathbb X$ denotes the set of admissible controllers D n and readout times T determined by the optimization constraints.
The controllers used in this study were developed using MATLAB’s fminunc solver with the BFGS quasi-Newton algorithm. The optimization was biased toward producing high-fidelity controllers by choosing start times corresponding to high-fidelity peaks in the transfer of an equivalent chain between |IN⟩ and |OUT⟩ as initial values for the time variable. Furthermore, we placed symmetry conditions on the possible values of D n . Specifically, the $D_{n}=d_{n}|n\rangle \langle n|$ were constrained so that d IN = d OUT and d IN + k = d OUT − k for k ∈ {1…⌈(OUT−IN)/2⌉}. See Langbein et al. (2015) for a more detailed exposition of the optimization and constraints.
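The optimization can be sketched with SciPy's BFGS minimizer standing in for MATLAB's fminunc. This toy version omits the symmetry constraints and the peak-based initialization of T described above, so it is an illustrative sketch under simplifying assumptions, not the procedure used to generate the data set:

```python
import numpy as np
from scipy.optimize import minimize

def propagator(H, t):
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * lam)) @ V.conj().T

N, J = 5, 1.0
H0 = J * (np.eye(N, k=1) + np.eye(N, k=-1))
H0[0, -1] = H0[-1, 0] = J
inp, out = np.eye(N)[0], np.eye(N)[1]        # transfer |1> -> |2>

def infidelity(x):
    # decision variables: N bias fields d_n and the readout time T
    d, T = x[:N], x[N]
    return 1 - abs(out @ propagator(H0 + np.diag(d), T) @ inp) ** 2

x0 = np.concatenate([np.zeros(N), [np.pi]])  # crude initial guess
res = minimize(infidelity, x0, method='BFGS')
```

Because the problem is nonconvex, repeated runs from many initial guesses, as in the original study, are needed to find high-fidelity local optima.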
Robustness measure: log-sensitivity
Given a system model and controls to maximize the fidelity, we consider the issue of robustness of the control scheme to uncertainty in the system parameters or control fields. We denote an uncertain parameter (coupling coefficient or bias field) as ξ μ ∈ $$\mathbb{R}$$ such that
so that $\mu \in \{1,\ldots,N\}$ correspond to perturbations to the control and $\mu \in \{N+1,\ldots,2N\}$ correspond to perturbations to the Hamiltonian. Here, δ μ ∈ $$\mathbb{R}$$ represents the deviation from the nominal value, in physical units compatible with those of ξ μ .
These uncertainties enter the Hamiltonian through structure matrices S μ ∈ $$\mathbb{R}$$ N × N . The uncertain Hamiltonian becomes ${{\tilde H}}_D = {H_0} + D + \sum { _\mu }{\delta _\mu }{S_\mu }$ . Specifically we define
Consequently, we have the uncertain state-transition matrix as $\tilde U\left( t \right) = {e^{ - it\left( {{H_0} + D + \mathop \sum \nolimits_\mu {\delta _\mu }{S_\mu }} \right)}}$ . Considering a single uncertain parameter in the Hamiltonian, we look at the differential sensitivity of the state transition matrix to that parameter as
where ξ μ is defined as in (11) with nominal value given by ξ μ0 when δ μ = 0.
We note the differential sensitivity of (13), in both equivalent forms, is valuable in its own right to measure the effect of parameter uncertainty on e(T). However, this pure differential sensitivity carries an intrinsic scaling by the physical units of the parameter in the denominator of the limit. While this permits a useful comparison in sensitivity for the same type of uncertainty, it does not provide an unbiased measure for comparing robustness between different uncertainty categories. For this reason, we seek a dimensionless measure of robustness in the logarithmic sensitivity, requiring renormalization of the terms in (13) by U †(T)ξ μ0 or U †(T)δ μ0. Even though ∂/∂ξ μ = ∂/∂δ μ , these two normalization factors result in different log-sensitivities. This is obvious by noting that U †(T)ξ μ0 ≠ 0 while U †(T)δ μ0 = 0. Finally, observe that if the uncertain parameter has a nominal value of zero, the log-sensitivity formulation above requires modification to consider only deviations from the nominal value while producing a non-trivial measure of sensitivity.
Noting that the performance measure $$\cal{F}$$ (⋅) is time-based, we assess the robustness of the control scheme by determining the differential effect of uncertainty on the fidelity error e(T) or e Δ(T) as defined in (7) (equally the fidelity) for instantaneous readout as
and for time-windowed readout as
We see that (14) is the differential sensitivity of the fidelity error normalized by the ratio of the nominal parameter value and nominal fidelity error.
Consider a decomposition of $e^{-it({H_{0}}+D)}=\sum _{n=1}^{N} \Pi _{n}e^{-it{\lambda _{n}}}$ and let ω mn = λ m − λ n . Here Π n are the projectors onto the orthogonal subspaces of the controlled Hamiltonian H D . Specifically, from the spectral decomposition of H D = VΛV †, λ n is the nth diagonal entry of Λ and Π n is the dyadic product of the nth column of V with itself or Π n = V n V n †. Then, for e(T) = 1 − $$\cal{F}$$ (T), we have from Schirmer et al. (2018)
where ${\rm sinc}(x)={\sin (x) \over x}$ . For e Δ(T) = 1 − $$\cal{F}$$ (T±Δ/2) we have a more complicated expression,
Note that for λ n = λ m = λ p there is no contribution to the sum. Specifically we have
and
We use the differential sensitivity established by (16) and (17), normalized by the ratio ${\xi _{\mu 0} \over e(T)}$ or ${\xi _{\mu 0} \over e_{\Delta }(T)}$ , to get a nontrivial, i.e., nonvanishing, log-sensitivity as robustness measure.
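Numerically, the log-sensitivity can also be approximated without the spectral formulas (16) and (17) by a central finite difference in the uncertain parameter. The sketch below computes s(ξ μ0, T) = (ξ μ0 / e(T)) ∂e/∂δ μ for a coupling-type structure matrix; the ring, biases, readout time, and target spin are all illustrative assumptions:

```python
import numpy as np

def propagator(H, t):
    lam, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * t * lam)) @ V.conj().T

N, J, T = 5, 1.0, 3.0
H0 = J * (np.eye(N, k=1) + np.eye(N, k=-1))
H0[0, -1] = H0[-1, 0] = J
D = np.diag([0.5, 0.0, 0.5, 0.0, 0.0])        # illustrative biases
inp, out = np.eye(N)[0], np.eye(N)[2]

def error(Hd):
    # fidelity error e(T) for the (possibly perturbed) Hamiltonian Hd
    return 1 - abs(out @ propagator(Hd, T) @ inp) ** 2

def log_sensitivity(S, xi0, h=1e-6):
    # (xi0 / e(T)) * de/ddelta_mu via a central finite difference
    de = (error(H0 + D + h * S) - error(H0 + D - h * S)) / (2 * h)
    return xi0 * de / error(H0 + D)

S_J = np.zeros((N, N))                        # structure matrix for J_12
S_J[0, 1] = S_J[1, 0] = 1.0
s_mu = log_sensitivity(S_J, J)
```

The structure matrix for a bias-field uncertainty would instead be the diagonal dyad |n⟩⟨n|; normalizing by ξ μ0 / e(T) makes the two uncertainty categories comparable on an equal, dimensionless footing.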
Analysis
Our analysis of the controllers produced by the optimization in Section 2.2 consists of two parts: (1) statistical hypothesis testing of the relationship between performance, as measured by the size of the fidelity error, and robustness, gauged by the size of the log-sensitivity and (2) identification of areas that require more exploration to explain the observed robustness properties.
Classical control considerations
To relate to classical robustness theory as constructed in the 1980s and to motivate the hypothesis tests performed, we compare the problem of state transfer |IN⟩→|OUT⟩ under a Hamiltonian H D containing the control terms considered in this paper, itself a paradigmatic problem in quantum control, with the classic paradigmatic problem of transferring the zero input state 0 to a constant output state 1 in a simple Single Input Single Output (SISO) control system, chosen for ease of exposition.
The accuracy of such transfer, be it quantum or classical, can be formulated in terms of a sensitivity operator that maps the desired output to the tracking error, here defined as the difference between the desired output and the actual output,
where 1(t) is the Heaviside unit step, δ(t) the Dirac delta and T(t) is the impulse response of the control system from the desired output to the actual output. Rewriting the above relationship as $\varepsilon (t)=\int _{0}^{t} S(t-\tau )1(\tau )d\tau$ defines the sensitivity operator S, quantifying accuracy. Since the seminal work of Bode (1945), motivated by feedback amplifiers, classical control has formulated the limitations in terms of the Laplace transforms of the operators, Ŝ(s) and T̂(s). Elementary manipulation in the Laplace domain reveals that T̂(s) is the log-sensitivity of Ŝ(s) relative to unstructured perturbations and the operators satisfy the fundamental limitation
forbidding simultaneous near zero error and near zero sensitivity. Only recently (O’Neil et al., 2022) has this Laplace domain limitation begun to be understood in the time domain, which is essential for quantum problems where readouts happen at a specific time.
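The limitation can be checked numerically for any loop transfer function: with S = 1/(1 + L) and T = L/(1 + L), the identity Ŝ(s) + T̂(s) = 1 holds at every point of the complex plane. The plant below is an arbitrary illustration, not one drawn from the paper:

```python
# Classical sensitivity / complementary-sensitivity identity S + T = 1
# for a unity-feedback loop with an illustrative plant L(s) = k / (s(s+1)).
def L(s, k=2.0):
    return k / (s * (s + 1))

def S(s, k=2.0):
    return 1 / (1 + L(s, k))          # sensitivity operator

def T(s, k=2.0):
    return L(s, k) / (1 + L(s, k))    # complementary sensitivity

s = 0.3 + 1.0j
assert abs(S(s) + T(s) - 1) < 1e-12   # S and T cannot both be near zero
```

Since S + T = 1 pointwise, driving the error term S toward zero forces the log-sensitivity term T toward one, which is exactly the accuracy-versus-robustness trade-off invoked in the hypothesis tests below.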
The quantum transfer error can be formulated similarly. However, when dealing with state transfer problems involving wavefunctions or pure states (as we do here), there is an additional complication from a tracking error point of view, in that such quantum states as |IN⟩ and |OUT⟩ are defined only up to a global phase factor exp (iφ). This means that the true tracking error is the projective error $\varepsilon _{\rm proj}(t)=|{\rm OUT}\rangle -\exp (i\varphi )\exp (-iH_{D}t)|{\rm IN}\rangle$ . Defining the input/output swapping operator W by |IN⟩=W|OUT⟩, the preceding can be rewritten as
Except for the phase factor, the connection between (20) and (22) is obvious. The phase is used to bring the error below the classical limitation by defining
Elementary complex analysis reveals that $\parallel \varepsilon _{\rm proj}^{{\rm *}}\parallel ^{2}=2(1-F)$ , where $F\colon =|\langle{\rm OUT|\ exp}(-iH_{D}t)|{\rm IN}\rangle |$ is the overlap between desired and actual states rather than the fidelity $$\cal{F}$$ = F 2. Nevertheless, for very high fidelity, $\parallel \varepsilon _{\rm proj}^{{\rm *}}\parallel ^{2}\approx (1-\cal F)$ . The sensitivity of the latter will be our major concern. The difficulties in the analysis arising from the global phase can also be avoided by formulating state transfer problems in the density operator formalism.
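The identity $\parallel \varepsilon _{\rm proj}^{{\rm *}}\parallel ^{2}=2(1-F)$ at the optimal phase can be verified numerically; the random unitary and basis states below are illustrative stand-ins for exp(−iH D t), |IN⟩ and |OUT⟩:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)                 # random unitary in place of exp(-iH_D t)
inp, out = np.eye(4)[0], np.eye(4)[2]

c = out @ U @ inp                      # overlap <OUT| U |IN>
F = abs(c)                             # overlap magnitude (fidelity is F**2)
phi = -np.angle(c)                     # global phase minimising the error
err = out - np.exp(1j * phi) * (U @ inp)

# ||err||^2 = 2 - 2 Re(e^{i phi} c) = 2(1 - F) at the optimal phase
assert np.isclose(np.linalg.norm(err) ** 2, 2 * (1 - F))
```

For F near 1, 2(1 − F) = (1 − F²)·2/(1 + F) ≈ 1 − F², which is the approximation $\parallel \varepsilon _{\rm proj}^{{\rm *}}\parallel ^{2}\approx (1-{\cal F})$ used above.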
Rewriting (22) as $\varepsilon _{\rm proj}^{{\rm *}}(t)=S_{\rm proj}(t)|{\rm OUT}\rangle$ , where $S_{\rm proj}(t)$ is the projective or quantum sensitivity function, it follows that, in the limit $$\cal{F}$$ ↑ 1,
Moreover, from (23) and ${\cal{F}}=|\langle{\rm OUT}|T(t)\rangle |^{2}$ , where |T(t)⟩=exp(−iH D t)|IN⟩ is the closed-loop transfer impulse response, it follows that
To make the above a limitation, let us remove the phase factor in $S_{\rm proj}(t)$ , in which case it is easily seen that $dS_{\varphi =0}(t)|{\rm OUT}\rangle =itT(t)\,dH_{D}$ for $[H_{D},dH_{D}]=0$ . In other words, as in classical control, T(t) is the sensitivity of the sensitivity function.
The connection between the classical (21) and the quantum limitation (24) is obvious, but it indicates that this limitation is still classical. However, incorporating the phase factor in the sensitivity function, as done in Jonckheere et al. (2019), could alleviate it.
Hypothesis test
We establish the following two-tailed hypothesis test to confirm or refute whether the controllers in our data set (Langbein et al., 2022) conform to the conventional limitations on robustness and performance established above. For brevity, in the following section we describe the hypothesis testing in terms of e(T) versus s(ξ μ0,T), but the conditions apply equally to e Δ(T) and s Δ(ξ μ0,T):
-
H 0: null hypothesis postulating no trend between s(ξ μ0,T) and e(T),
-
H 1+: alternative hypothesis one postulating positive correlation between s(ξ μ0,T) and e(T), indicative of controllers that do not exhibit the conventional limitation on performance and robustness,
-
H 1−: alternative hypothesis two postulating negative correlation between s(ξ μ0,T) and e(T), indicative of controllers that exhibit the conventional limitation on performance and robustness.
To execute the test we chose two distinct correlation measures: the Kendall τ as a nonparametric test based on rank-ordering of the data (Abdi, n.d.) and the Pearson r linear correlation coefficient to test the linear relation between the two metrics on a log-log scale. We chose ring sizes from N = 3 to N = 20. For all controllers examined, the initial state is taken as |IN⟩=|1⟩ so that the excitation is initially located at spin 1. For the time-windowed readout controllers (the dt controllers) we tested excitation transfer ranging from localization at the initial spin |IN⟩=|OUT⟩ = |1⟩ up to $|{\rm OUT}\rangle =\left| \left.\left\lceil {N \over 2}\right\rceil \right\rangle \right.$ . For the instant readout case (the t controllers) we consider transfers from |OUT⟩=|2⟩ through $|{\rm OUT}\rangle =\left| \left.\left\lceil {N \over 2}\right\rceil \right\rangle \right.$ . We note that there is nothing unique in the selection of |1⟩ as the initial spin, as the ring is rotationally symmetric. Likewise, consideration of transfers only up to $\left\lceil {N \over 2}\right\rceil$ is justified by the symmetry of the ring as well. This provides a total of 90 test cases for the instantaneous readout controllers and 108 test cases for the time-windowed readout case. Though a complete set of 2000 controllers exists for each possible transfer, we exclude controllers that yield a fidelity $$\cal{F}$$ < 0.9 and base our analysis on the remaining controllers, maintaining consistency with the analysis in Jonckheere et al. (2018).
To compute the degree of correlation between e(T) and s(ξ μ0,T) for each ring and transfer combination, we apply the corr(⋅,⋅) function from MATLAB with the ‘Kendall’ option to produce the Kendall τ and the ‘Pearson’ option to generate the Pearson r. With the raw Kendall τ and Pearson r, we establish the threshold for statistical significance at α = 0.01 to reject H 0 in favor of H 1+ for a positive (rank) correlation coefficient and in favor of H 1− for a negative (rank) correlation coefficient. We judge the level of significance for each possible test case depending on the correlation coefficient used. For the Kendall τ we normalize by the standard deviation so that $Z_{\tau }=\tau \left(\sqrt{{2(2n+5) \over 9n(n-1)}}\right)^{-1}$ where n is the number of samples (controllers) within the test case. We then quantify the statistical significance of the results through their p-values defined as
where Φ is the normal cumulative distribution function. To evaluate the statistical significance of the Pearson r, we translate the raw correlation coefficient to a t-statistic through $t_{r}=r\left(\sqrt{{1-r^{2} \over n-2}}\right)^{-1}$ . We then quantify the statistical significance of the test for a given value of r as
where $\cal S$ represents the cumulative Student’s t-distribution.
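The two test statistics defined above are straightforward to reproduce outside MATLAB; the following Python sketch implements Z τ, the normal two-tailed p-value via the error function, and t r (the sample values τ = −0.4 and n = 200 are illustrative, not results from the data set):

```python
import math

def kendall_z(tau, n):
    # Z_tau = tau / sqrt(2(2n+5) / (9 n (n-1)))
    return tau / math.sqrt(2 * (2 * n + 5) / (9 * n * (n - 1)))

def p_two_tailed(z):
    # p = 2 (1 - Phi(|z|)), with Phi the standard normal CDF
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def pearson_t(r, n):
    # t_r = r / sqrt((1 - r^2) / (n - 2)); its p-value follows from the
    # Student's t-distribution with n - 2 degrees of freedom
    return r / math.sqrt((1 - r * r) / (n - 2))

z = kendall_z(-0.4, 200)
significant = p_two_tailed(z) < 0.01   # here: reject H0 in favor of H1-
```

For the Pearson statistic the p-value requires the Student's t CDF, which is omitted here since the Python standard library does not provide it.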
Finally, though we are generally looking at the trend of s(ξ μ0,T) versus e(T), there are a total of 2N perturbation directions to examine for each excitation transfer. To streamline the analysis, we focus specifically on three categories of perturbation within each possible transfer and ring:
-
Norm over the N controller perturbations – in this case we examine the trend of e(T) versus $\parallel s(\xi _{\mu 0},T)\parallel _{C}=\sqrt{\sum _{\mu =1}^{N}|s(\xi _{\mu 0},T)|^{2}}$ .
-
Norm over the N Hamiltonian uncertainties – in this case we examine the trend of e(T) versus $\parallel s(\xi _{\mu 0},T)\parallel _{H}=\sqrt{\sum _{\mu =N+1}^{2N}|s(\xi _{\mu 0},T)|^{2}}$ .
-
Norm of all 2N uncertainties – in this case we examine the trend of e(T) versus $\parallel s(\xi _{\mu 0},T)\parallel =\sqrt{\sum _{\mu =1}^{2N}|s(\xi _{\mu 0},T)|^{2}}$ .
We present the results in the following section in terms of these uncertainty categories.
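Given a vector of 2N log-sensitivities ordered as in (11) (entries 1…N for the controller perturbations, N+1…2N for the Hamiltonian couplings), the three norm categories are straightforward to compute; the sample values below are illustrative:

```python
import numpy as np

def sensitivity_norms(s, N):
    """Controller norm ||s||_C, Hamiltonian norm ||s||_H, and total norm
    ||s|| for a length-2N vector of log-sensitivities."""
    s = np.asarray(s, dtype=float)
    return (np.linalg.norm(s[:N]),   # mu = 1..N     (controller)
            np.linalg.norm(s[N:]),   # mu = N+1..2N  (Hamiltonian)
            np.linalg.norm(s))       # mu = 1..2N    (all uncertainties)

c, h, a = sensitivity_norms([3.0, 4.0, 0.0, 0.0, 5.0, 12.0], N=3)
print(c, h, a)  # 5.0 13.0 ~13.93
```

Note that the total norm is not the sum of the two partial norms but their root-sum-square, ‖s‖² = ‖s‖ C ² + ‖s‖ H ².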
Hypothesis test results
The entire spreadsheet depicting the results of the hypothesis test is available in the repository Langbein et al. (2023). We present the following summary of significant deductions from the hypothesis test.
Instant readout controllers (t controllers)
The trend between e(T) and each normed measure of s(ξ μ0,T), measured by the Kendall τ for rank correlation, is overwhelmingly conventional, showing a negative correlation between error and log-sensitivity, save for the transfer from spin 1 to spin 2, or nearest-neighbor transfer. For nearest-neighbor transfers with N ≥ 7, the hypothesis test rejects H 0 in favor of H 1+. All of the nearest-neighbor tests meet the α = 0.01 threshold and are thus considered reliable. Though not the complete list of results, Table 1 provides a snapshot of the hypothesis test for the correlation between e(T) and ∥ s(ξ μ0,T)∥ for N = 3 to N = 12. In detail:
-
For the e(T) versus ∥ s(ξ μ0,T)∥ correlation, five of the 90 tests fail to achieve a significance level of α = 0.01 and are excluded. Of the remaining tests, all display a conventional negative trend, save for the nearest-neighbor transfers noted above.
-
Of the 90 tests for e(T) versus ∥ s(ξ μ0,T)∥ C , all but nine display a conventional trend with a confidence of at least 99%. Of these nine tests, all fall into the category of nearest-neighbor transfer, seven display a p-value greater than α and are discarded, and the other two display a nonconventional positive trend.
-
The tests for e(T) versus ∥ s(ξ μ0,T)∥ H follow the same pattern as that of ∥ s(ξ μ0,T)∥, with nearest-neighbor transfers displaying a non-conventional trend with high confidence, except for the cases with N < 6. Of the remaining tests, all show a conventional trend except for nine cases that fail to meet the required confidence level.
As a check on consistency, we compare the hypothesis test results based on the rank-correlation of the Kendall τ with the results based on the linear correlation coefficient of Pearson’s r. Though not identical, the hypothesis tests based on each measure show strong agreement as summarized below:
-
For the ∥ s(ξ μ0,T)∥ tests, the hypothesis tests provide identical results in terms of acceptance or rejection of H 0 with two exceptions, neither of which affects the non-conventional trend for nearest-neighbor transfer. For the Pearson r-based test, the N = 6, 1 → 2 test does not display the confidence to reject the null hypothesis as in the Kendall τ-based test. Conversely, while the N = 12, 1 → 6 transfer is unable to reject H 0 for the Kendall τ test, the Pearson r test does reject the null hypothesis in favor of H 1−.
-
The comparison for ∥ s(ξ μ0,T)∥ C shows strong consistency, agreeing in rejection of H 0 in favor of H 1− for all transfers except for N ≥ 11 nearest-neighbor transfers with one exception – the Pearson r test is inconclusive for the N = 10 nearest-neighbor transfer. Of the remaining nine nearest-neighbor tests, the Pearson r test provides higher confidence, with seven of the nine rejecting H 0 in favor of H 1+ with high confidence.
-
The Pearson r-based hypothesis test for e(T) versus ∥ s(ξ μ0,T)∥ H agrees with the Kendall τ in rejection of H 0 for all nearest-neighbor transfers for N ≥ 6 but displays ten other cases with failure to reject H 0 , compared to nine for the Kendall τ test.
Time-windowed readout controllers (dt controllers)
The trend between e Δ(T) and the normed measures of ∥ s Δ(ξ μ0,T)∥ shows a more complicated pattern than that of the t controllers, neither clearly conventional nor nonconventional. Rather, the overall trend shows a nonconventional positive correlation between e Δ(T) and ∥ s Δ(ξ μ0,T)∥ for target spins of |OUT⟩ = 1 to |OUT⟩ = 4 but a conventional, negative trend for transfers with |OUT⟩ ≥ 5. However, specifically for the tests concerning e Δ(T) versus ∥ s Δ(ξ μ0,T)∥ H , the test results in uniform refutation of H 0 in favor of H 1− for the localization cases where |OUT⟩ = 1, with p < α = 0.01 for all tests. Table 2 provides a characteristic example of the Kendall τ-based hypothesis test for e Δ(T) versus ∥ s Δ(ξ μ0,T)∥ H for N = 3 through N = 12. In summary of the Kendall τ-based hypothesis test for the time-windowed controllers, we observe the following:
-
Of the 108 test cases for the trend in e Δ(T) versus ∥ s Δ(ξ μ0,T)∥, 21 fail to meet the minimum confidence level and are not considered. However, for the 66 cases of localization (|OUT⟩ = 1) or transfers to |OUT⟩ ≤ 4, only three fail to meet the required confidence level. For the remaining 63 tests for localization or transfer up to |OUT⟩ = 4, the hypothesis test rejects H 0 in favor of H 1+, a non-conventional trend. In contrast, of the 42 tests for transfer to |OUT⟩ ≥ 5, 18 fail to meet the required confidence level. However, the remaining 24 tests all display a negative, conventional trend for these transfers.
-
For the tests of e Δ(T) versus ∥ s Δ(ξ μ0,T)∥ C , we see a higher percentage of tests that fail to meet the minimum confidence level, 36 of 108. In terms of trends, all localization or nearest-neighbor transfers show a nonconventional trend for sensitivity to controller uncertainty. Of the 56 tests for transfers to |OUT⟩ ≥ 4, 24 fail to make the cut, but the remaining 32 tests all show a conventional trend. Finally, we note that of the 16 next-nearest-neighbor transfers, 14 do not show a p < 0.01, and the two that do (N = 5 and N = 6) display the nonconventional behavior.
-
The relation between e Δ(T) and ∥ s Δ(ξ μ0,T)∥ H shows a solid trend of conventional behavior for localization, with a nonconventional trend for transfer to spins |OUT⟩ ≤ 4, but inconclusive results for the remaining cases. Specifically, of the 18 localization tests, all show a conventional trend with high confidence. Conversely, of the 48 cases of transfer for 2 ≤ |OUT⟩ ≤ 4, all display a positive, nonconventional trend with p < 0.01. However, the remaining 42 test cases fail to display a clear trend, with the majority, 32, failing to meet the required confidence level and the remainder displaying no clear trend.
As a check on consistency, we compare the Kendall τ-based hypothesis test results with those obtained from the Pearson r. As in the case of the instant-readout controllers, we see strong agreement between the two measures:
- For e Δ(T) versus ∥ s Δ(ξ μ0,T)∥, the 66 test cases for localization through |OUT⟩ ≤ 4 disagree in only three cases. The Kendall τ is inconclusive for the N = 6, 1 → 3 transfer and N = 8 localization, where the Pearson r shows nonconventional trends, while the Pearson r is inconclusive on the N = 7, 1 → 4 transfer. The remaining 63 test cases for |OUT⟩ ≤ 4 agree on a nonconventional trend. Of the remaining 42 test cases, the Pearson r yields 21 inconclusive tests versus 20 for the Kendall τ, but all cases in which both tests present p < 0.01 agree on a conventional trend.
- For controller uncertainty, the e Δ(T) versus ∥ s Δ(ξ μ0,T)∥ C trends show perfect agreement in rejecting H 0 in favor of H 1+ for all localization and nearest-neighbor transfers. In terms of the next-nearest-neighbor transfers (those to |OUT⟩ = 3), the Pearson-based test agrees with the Kendall τ-based test in rejecting H 0 in favor of H 1+ for N = 5 and N = 6. However, for the remaining 14 next-nearest-neighbor transfers, the Pearson r test statistic provides inconclusive results. For the 56 test cases with |OUT⟩ ≥ 4, the Pearson r-based test returns 18 instances that fall below the confidence threshold. However, in all cases where both the Kendall τ and Pearson r present high confidence, the hypothesis tests agree in rejecting H 0 in favor of H 1−.
- Of the 108 test cases for e Δ(T) versus ∥ s Δ(ξ μ0,T)∥ H , the two measures agree in 100 cases. The eight conflicts arise from one test failing to reject the null hypothesis while the other rejects H 0; in no case do both tests reject H 0 in favor of opposing alternative hypotheses. Of note, for the conventional trend of localization assessed by the Kendall τ-based test, the Pearson r test agrees on all counts save N = 5 and N = 12, which are inconclusive based on the Pearson r.
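The Pearson-based classification used in this consistency check can be sketched as follows. This is an illustrative reconstruction, not the analysis code: it classifies a trend as positive (H 1+), negative (H 1−), or inconclusive using the Pearson r with one-sided tests based on the Fisher z-transform, a normal-approximation alternative to the exact t-test, at α = 0.01.

```python
from math import atanh, sqrt
from statistics import NormalDist, mean

def pearson_trend(x, y, alpha=0.01):
    """Classify the trend in (x, y) via the Pearson r.

    Returns 'positive' (reject H0 for H1+), 'negative' (reject H0
    for H1-), or 'inconclusive' at significance level alpha, using
    the Fisher z-transform for the one-sided p-values.
    """
    n = len(x)
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    r = sxy / sqrt(sxx * syy)
    # Fisher z-transform: atanh(r) * sqrt(n - 3) is approximately
    # standard normal under the null hypothesis of no correlation.
    z = atanh(r) * sqrt(n - 3)
    p_pos = 1 - NormalDist().cdf(z)   # H1+: positive trend
    p_neg = NormalDist().cdf(z)       # H1-: negative trend
    if p_pos < alpha:
        return "positive"
    if p_neg < alpha:
        return "negative"
    return "inconclusive"
```

With this three-way classification, "agreement" between the Kendall τ and Pearson r tests means both return the same label, and a "conflict" means one is inconclusive while the other rejects H 0.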
Equivalent error: widely varying robustness
Though the hypothesis test of Section 3.3 provides insight into the trends of error versus log-sensitivity on a large scale, it does not tell the entire story. In fact, one of the more interesting features of the controllers in this data set is the range of log-sensitivity observed for a given fidelity error. Figure 1 displays the log-sensitivity for controller and Hamiltonian perturbations versus error for instant-time readout (t controllers) in a 5-ring with nearest-neighbor transfer. The overall trend of the figure confirms the negative trend of the hypothesis test, but the spread of log-sensitivities for a given error is large. For example, the log-sensitivity for controllers with a fidelity error e(T) = 10⁻⁵ ranges from as low as 10⁰ to greater than 10⁵. This belies a simple one-parameter relation between log-sensitivity and error, but provides evidence for the existence of controllers with the best possible properties: good performance with acceptable robustness. As a second example, we show the plot of ∥ s Δ(ξ μ0,T)∥ C versus e Δ(T) for a 3-ring with nearest-neighbor transfer and time-windowed readout (dt controller) in Figure 2. The plot confirms the positive (nonconventional) trend of the hypothesis test but displays wide variation in log-sensitivities in the vicinity of e Δ(T) = 0.016, from as low as 10⁻³ up to 10⁵. Identification of the factors that guarantee the smaller log-sensitivities or prevent the larger values would be highly beneficial in controller design and selection, but remains an open question.
Next, we consider the nearest-neighbor transfers for instantaneous-readout controllers with N ≥ 7, depicted in Figure 3. Though the hypothesis test results show rejection of H 0 in favor of H 1+ for these cases, the trend is not readily apparent visually, which is confirmed by the relatively small values of the Kendall τ and Pearson r for these transfers. Of greater importance, however, are the variations in the log-sensitivity for a given error seen in the plot, again indicating the possibility of controllers that provide good robustness for acceptable performance.
Finally, we look at the plot of a localization case in Figure 4. We clearly see the contrast in robustness for localization between Hamiltonian uncertainty and controller uncertainty. The strong nonconventional trend between e Δ(T) and ∥ s Δ(ξ μ0,T)∥ C is clearly evident, while the slightly negative, conventional trend for ∥ s Δ(ξ μ0,T)∥ H is also perceptible. More important, however, is the nearly constant value of the log-sensitivity for Hamiltonian uncertainty over the range of error, a feature that can likely be exploited to provide robustness guarantees over large performance ranges in the case of localization.
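The wide spread of robustness at a fixed error suggests a simple screening step during controller selection: bin controllers by fidelity error and examine the range of log-sensitivities within each bin. A minimal sketch, assuming hypothetical (error, log-sensitivity) pairs rather than the actual data set (which is available at the repository cited below):

```python
import math

def sensitivity_spread(controllers, err_lo, err_hi):
    """Report the robustness spread within one error bin.

    `controllers` is a list of (fidelity_error, log_sensitivity)
    pairs. Returns (min, max, decades) over the controllers whose
    error lies in [err_lo, err_hi], where `decades` is the spread
    in orders of magnitude, or None if the bin is empty.
    """
    in_bin = [s for e, s in controllers if err_lo <= e <= err_hi]
    if not in_bin:
        return None
    lo, hi = min(in_bin), max(in_bin)
    return lo, hi, math.log10(hi / lo)

# Hypothetical pairs: three controllers near e(T) = 1e-5 whose
# log-sensitivities span several orders of magnitude.
data = [(1e-5, 1e2), (1.2e-5, 3e4), (9e-6, 1e5), (5e-2, 50.0)]
print(sensitivity_spread(data, 5e-6, 2e-5))
```

A controller with error inside the target bin but log-sensitivity near the bin minimum would combine good performance with acceptable robustness, which is precisely the kind of controller the figures suggest exists.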
Conclusion
In this paper, we use a basic hypothesis test to determine the degree to which controllers optimized for coherent excitation transport in quantum rings abide by the limitations implied by classical control, extending the work initiated in Jonckheere et al. (Reference Jonckheere, Schirmer and Langbein2018). In contrast to that work, we extend the analysis to consider not only controllers optimized for time-averaged fidelity but also those optimized for instantaneous readout. Furthermore, we include uncertainty in both the controlling bias fields and the spin couplings. Overall, our results confirm those of Jonckheere et al. (Reference Jonckheere, Schirmer and Langbein2018) in that controllers optimized for readout over a time window exhibit a degree of nonclassical behavior for transfer to spins near the initial spin and regain conventional behavior for transfers between more distant spins. However, while the results of Jonckheere et al. (Reference Jonckheere, Schirmer and Langbein2018) indicate nonconventional trends for the localization cases with Hamiltonian perturbations, using the updated calculations of (17) yields more conventional results based on the Kendall τ and Pearson r hypothesis tests. In extending the analysis to controllers optimized for instantaneous readout, we note a strong conventional trend for all ring sizes and transfers, save for the nearest-neighbor transfers with N ≥ 7. Finally, we show that, beyond the hypothesis testing, controllers of both types display widely varying levels of robustness for the same error.
Looking to future work, we need to identify what drives the variation in log-sensitivity for controllers with similar error in order to direct synthesis towards controllers that provide the best robustness properties for a given fidelity requirement. Next, the cause of the differences in the log-sensitivity trends observed between controllers optimized for instantaneous readout and those optimized for readout over a time window, and between nearest-neighbor and next-nearest-neighbor transfers for both types of controllers, needs to be clarified. Understanding these differences holds the potential to exploit such properties to navigate around the classically imposed fundamental limitations. Finally, it is necessary to generalize the one-uncertainty-at-a-time nature of the differential sensitivity technique used in this paper to more general methods that account for multiple structured uncertainties or even unstructured uncertainties.
Data availability statement
The data is available at Langbein et al. (Reference Langbein, O’Neil and Shermer2022).
Financial support
Sean O’Neil acknowledges PhD funding from the US Army Advanced Civil Schooling program.
Competing interests
The authors report no conflicts of interest.