Quantum information science has changed our perception of quantum physics from a subject of passive understanding into a source of technological advances. By actively exploiting the two essential elements of quantum physics, coherence and entanglement, technologies such as quantum computing or quantum sensing hold the promise of solving computationally hard problems or reaching unprecedented sensitivity. These technologies rely on the ability to accurately perform quantum operations for increasingly complex quantum systems. Quantum optimal control addresses this challenge by providing a set of tools to devise and implement shapes of external fields that accomplish a given task in the best way possible. Originally developed in the context of molecular physics and nuclear magnetic resonance, quantum optimal control theory has been adapted to the specific needs of quantum information science in recent years. Calculating optimized external field shapes for tasks such as state preparation or quantum gate implementation has thus become standard, even for the large Hilbert space dimensions encountered, e.g., in Rydberg atoms. Experimental implementation of the calculated field shapes, using arbitrary waveform generators, has been eased by the latter becoming commercially available. The successful demonstration of quantum operations in various experiments attests to the level of maturity that quantum optimal control in quantum technologies has reached.
In order to calculate optimized external field shapes, two choices need to be made: one for the optimization functional and one for the optimization method. The functional comprises the desired figure of merit, such as a gate or state-preparation error, as well as additional constraints, such as amplitude or bandwidth restrictions. Optimal control methods can broadly be classified into gradient-free and gradient-based algorithms, which evaluate the optimization functional either alone or together with its gradient. Gradient-based methods typically converge faster, unless the number of optimization parameters can be kept small. Most gradient-based methods rely on the iterative solution of a set of coupled equations that combine forward propagation of initial states, backward propagation of adjoint states, and the control update. A popular representative of concurrent-update methods is GRadient Ascent Pulse Engineering (GRAPE). Krotov's method, in contrast, requires sequential updates. This comes with the advantage of guaranteed monotonic convergence and obviates the need for a line search in the direction of the gradient.
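The structure of one iteration of the sequential scheme can be sketched in plain NumPy for a toy two-level state-to-state transfer with Hamiltonian H(t) = ε(t)·σ_x. This is an illustrative sketch, not the package's API: the function names, the choice of functional J = 1 − |⟨target|ψ(T)⟩|², and the step-size parameter λ (the weight of a quadratic penalty on the pulse change) are our own.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli sigma_x

def expm_2x2(H, dt):
    """Propagator exp(-i*H*dt) for a Hermitian 2x2 matrix via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

def krotov_step(eps, psi0, target, dt, lam):
    """One sequential Krotov-style update for a state-to-state transfer.

    Toy model: H(t) = eps(t) * sigma_x; piecewise-constant pulse `eps`.
    """
    nt = len(eps)
    # forward propagate with the current pulse, storing all states
    psi = [psi0]
    for n in range(nt):
        psi.append(expm_2x2(eps[n] * sx, dt) @ psi[-1])
    # adjoint boundary condition for J = 1 - |<target|psi(T)>|^2
    chi = (target.conj() @ psi[-1]) * target
    chis = [chi]
    # backward propagate the adjoint state, still under the *old* pulse
    for n in range(nt - 1, -1, -1):
        chis.append(expm_2x2(eps[n] * sx, dt).conj().T @ chis[-1])
    chis = chis[::-1]  # chis[n] is the adjoint state at grid point n
    # sequential update: re-propagate forward, updating each interval in turn
    new_eps = np.empty_like(eps)
    phi = psi0.copy()
    for n in range(nt):
        grad = np.imag(chis[n].conj() @ (sx @ phi))
        new_eps[n] = eps[n] + grad / lam
        phi = expm_2x2(new_eps[n] * sx, dt) @ phi
    fidelity = abs(target.conj() @ phi) ** 2
    return new_eps, fidelity

# population transfer |0> -> |1>, starting from a constant guess pulse
psi0 = np.array([1, 0], dtype=complex)
target = np.array([0, 1], dtype=complex)
nt, T = 100, 2.0
eps = np.full(nt, 0.5)
fid = 0.0
for _ in range(10):
    eps, fid = krotov_step(eps, psi0, target, T / nt, lam=2.0)
```

The sequential nature of the method is visible in the last loop: each pulse interval is updated using the stored adjoint states and the state propagated under the already-updated earlier intervals.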
The choice of Python as an implementation language is due to Python’s
easy-to-learn syntax, expressiveness, and immense popularity in the
scientific community. Moreover, the QuTiP library provides a general-purpose tool to numerically describe quantum systems and their dynamics. QuTiP already includes basic versions of other popular quantum control algorithms, such as GRAPE and the gradient-free CRAB. The Jupyter notebook framework provides an ideal platform for the interactive exploration of the package's capabilities and facilitates reproducible research workflows.
The krotov package targets both students wishing to enter the field of quantum optimal control and researchers in the field. By providing a comprehensive set of examples, we enable users of our package to explore the formulation of typical control problems and to understand how Krotov's method can solve them. These examples are inspired by recent publications and thus show the use of the method in the purview of current research. In particular, the package is not restricted to closed quantum systems, but can fully address open system dynamics, and thus aid in the development of Noisy Intermediate-Scale Quantum (NISQ) technology. Optimal control is also increasingly important in the design of experiments, and we hope that the availability of an easy-to-use implementation of Krotov's method will facilitate this further.
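What "open system dynamics" entails can be illustrated with a minimal sketch of Lindblad-master-equation propagation in plain NumPy. This is purely illustrative (explicit Euler integration, hand-rolled operators); it is not how the package itself propagates states, which is delegated to user-supplied, QuTiP-compatible routines.

```python
import numpy as np

def lindblad_rhs(rho, H, Ls, gammas):
    """Right-hand side of the Lindblad master equation for density matrix rho."""
    out = -1j * (H @ rho - rho @ H)
    for gamma, L in zip(gammas, Ls):
        LdL = L.conj().T @ L
        out += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

def propagate(rho0, H, Ls, gammas, t, nt):
    """Propagate rho0 for time t using nt explicit Euler steps (illustration only)."""
    dt = t / nt
    rho = rho0.astype(complex)
    for _ in range(nt):
        rho = rho + dt * lindblad_rhs(rho, H, Ls, gammas)
    return rho

# decaying qubit: the excited state |1> relaxes to |0> at rate gamma = 1
sigma_minus = np.array([[0, 1], [0, 0]], dtype=complex)
H = 0.5 * np.diag([-1.0, 1.0]).astype(complex)
rho0 = np.diag([0.0, 1.0]).astype(complex)  # start in the excited state
rho = propagate(rho0, H, [sigma_minus], [1.0], t=1.0, nt=1000)
```

Unlike closed-system (Schrödinger) dynamics, the propagation acts on density matrices and is non-unitary: the excited-state population decays as exp(−γt) while the trace of ρ is preserved.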
Large Hilbert space dimensions and open quantum systems in particular require considerable numerical effort to optimize. Compared to the Fortran and C/C++ languages traditionally used for scientific computing, and more recently Julia, pure Python code usually performs slower by two to three orders of magnitude. Thus, for hard optimization problems that require several thousand iterations to converge, the Python implementation provided by the krotov package may not be sufficiently fast. In this case, it may be desirable to implement the entire optimization and time propagation in a single, more efficient (compiled) language. Our Python implementation of Krotov's method puts an emphasis on clarity, and the documentation provides detailed explanations of all necessary concepts, especially the correct time discretization and the possibility of parallelizing the optimization. The krotov package can thus serve as a reference implementation, leveraging Python's reputation as "executable pseudocode", and as a baseline against which to test other implementations.
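The time-discretization convention can be sketched briefly. The following reflects our reading, stated as an assumption: states live on the points of the time grid, while piecewise-constant control values live on the intervals between them, so a pulse has one value fewer than the grid has points.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)  # Pauli sigma_x

tlist = np.linspace(0.0, 1.0, 11)          # 11 grid points -> 10 intervals
t_mid = 0.5 * (tlist[:-1] + tlist[1:])     # interval midpoints
eps = np.sin(np.pi * t_mid)                # one control value per interval

psi = np.array([1, 0], dtype=complex)      # state at the first grid point
for n in range(len(eps)):                  # one propagation step per interval
    dt = tlist[n + 1] - tlist[n]
    w, V = np.linalg.eigh(eps[n] * sx)     # H = eps_n * sigma_x on interval n
    psi = (V * np.exp(-1j * w * dt)) @ V.conj().T @ psi
```

Keeping controls on the intervals rather than the grid points is what makes each piecewise-constant propagation step well defined, and getting this offset right is one of the discretization subtleties alluded to above.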