How-Tos¶
How to optimize towards a quantum gate¶
To optimize towards a quantum gate \(\Op{O}\) in a closed quantum system, set one Objective for each state in the logical basis, with the basis state \(\ket{\Psi_k}\) as the initial_state and \(\Op{O} \ket{\Psi_k}\) as the target.
You may use krotov.gate_objectives()
to construct the appropriate list of objectives. See the
Optimization of an X-Gate for a Transmon Qubit for an example. For more
advanced gate optimizations, also see How to optimize towards a two-qubit gate up to single-qubit corrections,
How to optimize towards an arbitrary perfect entangler, How to optimize in a dissipative system, and
How to optimize for robust pulses.
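Conceptually, the resulting list of objectives pairs each logical basis state with its image under the gate. A minimal sketch in plain NumPy for an X gate (the dicts here only illustrate the pairing; in practice, krotov.gate_objectives() constructs proper Objective instances from QuTiP objects):

```python
import numpy as np

# logical basis states |0>, |1> as column vectors
basis = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]

# the target gate O (here: the Pauli-X gate)
O = np.array([[0.0, 1.0], [1.0, 0.0]])

# one objective per basis state: |Psi_k> as the initial state,
# O |Psi_k> as the target state
objectives = [
    {"initial_state": psi, "target": O @ psi} for psi in basis
]
```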
How to optimize complex control fields¶
This implementation of Krotov’s method requires real-valued control fields. You must rewrite your Hamiltonian so that the real part and the imaginary part of the field appear as two independent controls. This is always possible. For example, for a driven harmonic oscillator in the rotating wave approximation, the interaction Hamiltonian is given by
\[\Op{H}_{\text{int}} = \epsilon^*(t) \Op{a} + \epsilon(t) \Op{a}^\dagger = \epsilon_{\text{re}}(t) \left(\Op{a} + \Op{a}^\dagger\right) + \epsilon_{\text{im}}(t) \left(i \Op{a}^\dagger - i \Op{a}\right)\,,\]
where \(\epsilon_{\text{re}}(t) = \Re[\epsilon(t)]\) and \(\epsilon_{\text{im}}(t) = \Im[\epsilon(t)]\) are considered as two independent (real-valued) controls.
See the Optimization of a state-to-state transfer in a lambda system with RWA for an example.
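The splitting of a complex field into two real controls can be verified numerically. A self-contained NumPy check (the oscillator dimension and field amplitude are arbitrary illustration values):

```python
import numpy as np

N = 4  # truncated oscillator dimension (illustration only)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
ad = a.conj().T                             # creation operator

eps = 0.7 - 0.3j  # complex field amplitude at some fixed time t
eps_re, eps_im = eps.real, eps.imag

# interaction Hamiltonian with the original complex control
H_complex = eps.conjugate() * a + eps * ad

# rewritten with two independent real-valued controls
H_real = eps_re * (a + ad) + eps_im * (1j * ad - 1j * a)
```

Both expressions give the same (Hermitian) operator for any value of \(\epsilon(t)\).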
How to exclude a control from the optimization¶
In order to force the optimization to leave any particular control field unchanged, set its update shape to krotov.shapes.zero_shape() in the pulse_options that you pass to optimize_pulses().
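Schematically, the pulse_options dict maps each control to its optimization options. The controls eps_opt and eps_frozen below are hypothetical placeholders, and zero_shape is written out inline to show its effect (in practice you would use krotov.shapes.zero_shape directly):

```python
def eps_opt(t, args):
    # hypothetical guess field for the control to be optimized
    return 1.0

def eps_frozen(t, args):
    # hypothetical control that must remain unchanged
    return 0.5

def S(t):
    # update shape for the optimized control: on inside [1, 9]
    return 1.0 if 1.0 <= t <= 9.0 else 0.0

def zero_shape(t):
    # mirrors krotov.shapes.zero_shape: identically zero, so the
    # Krotov update for this control vanishes in every iteration
    return 0.0

pulse_options = {
    eps_opt: dict(lambda_a=5.0, update_shape=S),
    eps_frozen: dict(lambda_a=5.0, update_shape=zero_shape),
}
```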
How to define a new optimization functional¶
In order to define a new optimization functional \(J_T\):

1. Decide on what should go in Objective.target to best describe the physical control target. If the control target is reached when the Objective.initial_state evolves to a specific target state under the optimal control fields, that target state should be included in target.

2. Define a function chi_constructor that calculates the boundary condition for the backward propagation in Krotov’s method,

   \[\ket{\chi_k(T)} \equiv - \left. \frac{\partial J_T}{\partial \bra{\phi_k(T)}} \right\vert_{\ket{\phi_k(T)}}\,,\]

   or the equivalent expression in Liouville space. This function should calculate the states \(\ket{\chi_k}\) based on the forward-propagated states \(\ket{\phi_k(T)}\) and the list of objectives. For convenience, when target contains a target state, chi_constructor will also receive tau_vals containing the overlaps \(\tau_k = \Braket{\phi_k(T)}{\phi_k^{\tgt}}\). See chis_re() for an example.

3. Optionally, define a function that can be used as an info_hook in optimize_pulses() and which returns the value \(J_T\). This is not required to run an optimization, since the functional is entirely implicit in chi_constructor. However, calculating the value of the functional is useful for convergence analysis (check_convergence in optimize_pulses()).
See krotov.functionals for some standard functionals. An example of a more advanced functional is the Optimization towards a Perfect Entangler.
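As a concrete illustration of step 2, consider the standard functional \(J_{T,\text{re}} = 1 - \frac{1}{N} \Re \sum_k \tau_k\). Applying the definition of \(\ket{\chi_k(T)}\) gives \(\ket{\chi_k(T)} = \frac{1}{2N} \ket{\phi_k^{\tgt}}\). The following plain-NumPy sketch mirrors what chis_re() does (arrays stand in for QuTiP objects, and the signature is simplified relative to the real API):

```python
import numpy as np

def chis_re_sketch(fw_states_T, target_states):
    """Sketch of a chi_constructor for J_T,re = 1 - Re[sum_k tau_k] / N.

    The boundary condition chi_k(T) = -dJ_T/d<phi_k(T)| evaluates to
    (1 / (2N)) |phi_k^tgt>, independent of the forward-propagated states.
    """
    N = len(target_states)
    return [phi_tgt / (2 * N) for phi_tgt in target_states]

# hypothetical forward-propagated states and target states (N = 2)
fw = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
tgt = [np.array([0.0, 1.0]), np.array([1.0, 0.0])]
chis = chis_re_sketch(fw, tgt)
```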
How to penalize population in a forbidden subspace¶
In principle, optimize_pulses()
has a state_dependent_constraint.
However, this has some caveats. Most notably, it results in an inhomogeneous
equation of motion, which is currently not implemented.
The recommended “workaround” is to place artificially high dissipation on the levels in the forbidden subspace. A non-Hermitian Hamiltonian is usually a good way to realize this. See the Optimization of a dissipative state-to-state transfer in a Lambda system for an example.
How to optimize towards a two-qubit gate up to single-qubit corrections¶
Use krotov.objectives.gate_objectives() with local_invariants=True in order to construct a list of objectives suitable for an optimization using a “local-invariant functional” [MullerPRA11]. This optimizes towards a point in the Weyl chamber. The weylchamber package contains the suitable chi_constructor routines to pass to optimize_pulses().
How to optimize towards an arbitrary perfect entangler¶
Closely related to an optimization towards a point in the Weyl chamber is the optimization towards an arbitrary perfectly entangling two-qubit gate. Geometrically, this means optimizing towards the polyhedron of perfect entanglers in the Weyl chamber.
Use krotov.objectives.gate_objectives() with gate='PE' in order to construct a list of objectives suitable for an optimization using a “perfect entanglers” functional [WattsPRA2015][GoerzPRA2015]. This is illustrated in the Optimization towards a Perfect Entangler. Again, the chi_constructor is available in the weylchamber package.
How to optimize in a dissipative system¶
To optimize a dissipative system, it is sufficient to set an Objective with a density matrix for the initial_state and target, and a Liouvillian in Objective.H. See the Optimization of Dissipative Qubit Reset for an example.
Instead of a Liouvillian, it is also possible to set Objective.H to the system Hamiltonian and Objective.c_ops to the appropriate Lindblad operators. However, it is generally much more efficient to use krotov.objectives.liouvillian() to convert a time-dependent Hamiltonian and a list of Lindblad operators into a time-dependent Liouvillian. In either case, the propagate routine passed to optimize_pulses() must be aware of and compatible with the convention for the objectives.
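For reference, the underlying construction can be sketched in plain NumPy. With the column-stacking convention \(\text{vec}(A X B) = (B^T \otimes A)\,\text{vec}(X)\), the Lindblad equation becomes a matrix acting on the vectorized density matrix (this is a mathematical sketch, not the krotov implementation, which works with QuTiP superoperators):

```python
import numpy as np

def liouvillian_matrix(H, c_ops):
    """Superoperator matrix for the Lindblad equation, acting on
    vec(rho) in the column-stacking convention."""
    n = H.shape[0]
    I = np.eye(n)
    # coherent part: -i [H, rho]
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for c in c_ops:
        cdc = c.conj().T @ c
        L += (
            np.kron(c.conj(), c)        # c rho c†
            - 0.5 * np.kron(I, cdc)     # -1/2 c†c rho
            - 0.5 * np.kron(cdc.T, I)   # -1/2 rho c†c
        )
    return L
```

Applying this matrix to vec(ρ) reproduces \(-i[\Op{H},\rho] + \sum_k \left(\Op{L}_k \rho \Op{L}_k^\dagger - \frac{1}{2}\{\Op{L}_k^\dagger \Op{L}_k, \rho\}\right)\) term by term.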
Specifically for gate optimization, the routine gate_objectives() can be used to automatically set appropriate objectives for an optimization in
Liouville space. The parameter liouville_states_set indicates that the system
dynamics are in Liouville space and sets an appropriate choice of matrices that
track the optimization according to Ref. [GoerzNJP2014].
See the Optimization of a Dissipative Quantum Gate for an example.
For weak dissipation, it may also be possible to avoid the use of density matrices altogether, and to instead use a non-Hermitian Hamiltonian. For example, you may use the effective Hamiltonian from the MCWF method [PlenioRMP1998],
\[\Op{H}_{\text{eff}} = \Op{H} - \frac{i}{2} \sum_k \Op{L}_k^\dagger \Op{L}_k\,,\]
for the Hermitian Hamiltonian \(\Op{H}\) and the Lindblad operators \(\Op{L}_k\). Propagating with \(\Op{H}_{\text{eff}}\) (without quantum jumps) will lead to a decay in the norm of the state corresponding to how much dissipation the state is subjected to. Numerically, this will usually increase the value of the optimization functional (that is, the error). Thus, the optimization can be pushed towards avoiding decoherence, without explicitly performing the optimization in Liouville space. See the Optimization of a dissipative state-to-state transfer in a Lambda system for an example.
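The norm decay under \(\Op{H}_{\text{eff}}\) is easy to demonstrate numerically. A minimal two-level sketch (Hamiltonian, decay rate, and time step are arbitrary illustration values; the matrix exponential is computed by eigendecomposition):

```python
import numpy as np

# Hermitian two-level Hamiltonian and one Lindblad (decay) operator
H = np.array([[0.0, 0.5], [0.5, 1.0]])
L1 = np.sqrt(0.2) * np.array([[0.0, 1.0], [0.0, 0.0]])  # |1> -> |0>

# effective non-Hermitian Hamiltonian of the MCWF method
H_eff = H - 0.5j * (L1.conj().T @ L1)

# propagate the decaying state |1> for a short time dt
dt = 0.1
w, V = np.linalg.eig(H_eff)
U = V @ np.diag(np.exp(-1j * w * dt)) @ np.linalg.inv(V)
psi = U @ np.array([0.0, 1.0], dtype=complex)

# the norm drops below 1, reflecting the accumulated dissipation
norm = np.linalg.norm(psi)
```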
How to optimize for robust pulses¶
Control pulses can be made robust with respect to variations in the system by doing an ensemble optimization, as proposed in Ref. [GoerzPRA2014]. The idea is to sample a representative selection of possible system Hamiltonians, and to optimize over an average of the entire ensemble.
An appropriate set of objectives can be generated with the
ensemble_objectives()
function.
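Conceptually, the ensemble is a list of alternative Hamiltonians, each of which gets its own copy of the objectives, and the optimization minimizes the average of \(J_T\) over all of them. A minimal sketch (the parametrized Hamiltonian and the detuning range are hypothetical; in practice, ensemble_objectives() builds the extended list of objectives from an existing list plus the alternative Hamiltonians):

```python
import numpy as np

def H(delta):
    # hypothetical two-level Hamiltonian with an uncertain detuning
    return np.array([[delta / 2, 0.5], [0.5, -delta / 2]])

# sample a representative selection of possible system Hamiltonians
deltas = np.linspace(-0.1, 0.1, 5)
ensemble = [H(d) for d in deltas]

def J_T_average(J_T_vals):
    # the ensemble optimization minimizes the average of J_T
    # over all sampled systems
    return sum(J_T_vals) / len(J_T_vals)
```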
How to parallelize the optimization¶
Krotov’s method is inherently parallel across the different objectives. See
krotov.parallelization
, and the
Optimization of an X-Gate for a Transmon Qubit for an example.
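The essential point is that the propagation of each objective is independent of all the others, so they can be dispatched concurrently. A generic stdlib sketch (krotov.parallelization provides its own helpers for this; the propagation function here is a trivial placeholder):

```python
from concurrent.futures import ThreadPoolExecutor

objectives = ["objective %d" % k for k in range(4)]  # placeholders

def forward_propagation(objective):
    # placeholder for the expensive per-objective propagation;
    # each objective's propagation does not depend on the others
    return "propagated " + objective

# propagate all objectives concurrently
with ThreadPoolExecutor() as pool:
    results = list(pool.map(forward_propagation, objectives))
```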
How to maximize numerical efficiency¶
For systems of non-trivial size, the main numerical effort should be in the
simulation of the system dynamics. Every iteration of Krotov’s method requires
a full backward propagation and a full forward propagation of the states associated with each
objective. Therefore, the best numerical efficiency can be achieved by
optimizing the performance of the propagator that is passed to
optimize_pulses()
.
One possibility is to implement problem-specific propagators, such as
krotov.propagators.DensityMatrixODEPropagator
. Going further, you
might consider implementing the propagator with the help of lower-level instructions, e.g.,
by using Cython.
How to deal with the optimization running out of memory¶
Krotov’s method requires the storage of at least one set of propagated states over the entire time grid, for each objective. For the second-order update equation, up to three sets of stored states per objective may be required. In particular for larger systems and dynamics in Liouville space, the memory required for storing these states may become prohibitively large.
The optimize_pulses() routine accepts a storage parameter to which a constructor for an array-like container can be passed, wherein the propagated states will be stored. It is possible to pass custom out-of-memory storage objects, such as Dask arrays. However, this may carry a significant runtime penalty, as states will have to be read from disk or across the network.
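The interface such a container needs is minimal: a constructor plus indexed read and write access. A bare-bones sketch (the exact signature with which krotov invokes the constructor is an assumption here; a real out-of-memory implementation would back __setitem__/__getitem__ with disk or network storage instead of a Python list):

```python
class OutOfMemoryStorage:
    """Minimal array-like container illustrating the interface
    expected of a custom storage object."""

    def __init__(self, length):
        # a real implementation would allocate on-disk storage here
        self._data = [None] * length

    def __len__(self):
        return len(self._data)

    def __getitem__(self, i):
        # a real implementation would read the state back from disk
        return self._data[i]

    def __setitem__(self, i, value):
        # a real implementation would write the state out to disk
        self._data[i] = value

storage_constructor = OutOfMemoryStorage  # passed as storage=...
```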