krotov.info_hooks module¶
Routines that can be passed as info_hook to optimize_pulses()
Summary¶
Functions:
chain | Chain multiple info_hook or modify_params_after_iter callables together.
print_debug_information | Print full debug information about the current Krotov iteration.
print_table | Print a tabular overview of the functional values in the iteration.
__all__: chain, print_debug_information, print_table
Reference¶
- krotov.info_hooks.chain(*hooks)[source]¶
Chain multiple info_hook or modify_params_after_iter callables together.
Example
>>> def print_fidelity(**kwargs):
...     F_re = np.average(np.array(kwargs['tau_vals']).real)
...     print(" F = %f" % F_re)
>>> info_hook = chain(print_debug_information, print_fidelity)
Note
Functions that are connected via chain() may share the same shared_data argument, which they can use to communicate down the chain.
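For instance, here is a minimal sketch (not from the package docs) of two hypothetical hooks that communicate through shared_data; the keyword arguments used (tau_vals, iteration, shared_data) are those documented for print_debug_information() below:

import numpy as np
from krotov.info_hooks import chain

def compute_fidelity(**kwargs):
    # First hook in the chain: compute the real-part fidelity and stash
    # it in `shared_data` for hooks further down the chain.
    kwargs['shared_data']['F_re'] = np.average(
        np.array(kwargs['tau_vals']).real
    )

def report_fidelity(**kwargs):
    # Later hook in the chain: read the value stored by `compute_fidelity`.
    print("iteration %d: F = %f"
          % (kwargs['iteration'], kwargs['shared_data']['F_re']))

info_hook = chain(compute_fidelity, report_fidelity)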
- krotov.info_hooks.print_debug_information(*, objectives, adjoint_objectives, backward_states, forward_states, forward_states0, guess_pulses, optimized_pulses, g_a_integrals, lambda_vals, shape_arrays, fw_states_T, tlist, tau_vals, start_time, stop_time, iteration, info_vals, shared_data, propagator, chi_constructor, mu, sigma, iter_start, iter_stop, out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]¶
Print full debug information about the current Krotov iteration.
This routine is intended to be passed to optimize_pulses() as info_hook, and it exemplifies the full signature of a routine suitable for this purpose.
- Keyword Arguments
objectives (list[Objective]) – list of the objectives
adjoint_objectives (list[Objective]) – list of the adjoint objectives
backward_states (list) – If available, for each objective, an array-like object containing the states of the Krotov backward propagation.
forward_states – If available (second order only), for each objective, an array-like object containing the forward-propagated states under the optimized pulses. None otherwise.
forward_states0 – If available (second order only), for each objective, an array-like object containing the forward-propagated states under the guess pulses. None otherwise.
guess_pulses (list[numpy.ndarray]) – list of guess pulses
optimized_pulses (list[numpy.ndarray]) – list of optimized pulses
g_a_integrals (numpy.ndarray) – array of values \(\int_0^T g_a(t) \dd t = \int_0^T \frac{\lambda_a}{S(t)} \Abs{\Delta \epsilon(t)}^2 \dd t\), for each pulse \(\epsilon(t)\). The pulse updates \(\Delta \epsilon(t)\) are the differences of the optimized_pulses and the guess_pulses (zero in the zeroth iteration, which only performs a forward-propagation of the guess pulses). The quantity \(\int g_a(t) \dd t\) is a very useful measure of how much the pulse amplitudes change in each iteration. This tells us whether we’ve chosen good values for \(\lambda_a\): values that are too small cause “pulse explosions”, which immediately show up in \(\int_0^T g_a(t) \dd t\). Also, whether \(\int g_a(t) \dd t\) is increasing or decreasing between iterations gives an indication of whether the optimization is “speeding up” or “slowing down”, and thus whether convergence has been reached (negligible pulse updates).
lambda_vals (numpy.ndarray) – for each pulse, the value of the \(\lambda_a\) parameter
shape_arrays (list[numpy.ndarray]) – for each pulse, the array of update-shape values \(S(t)\)
fw_states_T (list) – for each objective, the forward-propagated state
tlist (numpy.ndarray) – array of time grid values on which the states are defined
tau_vals (numpy.ndarray) – for each objective, the complex overlap of the target state with the forward-propagated state, or None if no target state is defined.
start_time (float) – The time at which the iteration started, in epoch seconds
stop_time (float) – The time at which the iteration ended, in epoch seconds
iteration (int) – The current iteration number. For the initial propagation of the guess controls, zero.
info_vals (list) – List of the return values of the info_hook from previous iterations
shared_data (dict) – Dict of data shared between any modify_params_after_iter and any info_hook functions chained together via chain().
propagator (callable or list[callable]) – The propagator function(s) used by optimize_pulses().
chi_constructor (callable) – The chi_constructor function used by optimize_pulses().
mu (callable) – The mu function used by optimize_pulses().
sigma (None or krotov.second_order.Sigma) – The argument passed to optimize_pulses() as sigma.
iter_start (int) – The formal iteration number at which the optimization started
iter_stop (int) – The maximum iteration number after which the optimization will end.
out – An open file handle to which the information is written. This parameter is not part of the info_hook interface and defaults to stdout. Use functools.partial() to pass a different value, as in the sketch below.
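As a sketch of that last point (the log file name is a placeholder):

import functools
from krotov.info_hooks import print_debug_information

# Redirect the debug output to a file instead of stdout.
log_fh = open('krotov_debug.log', 'w')
debug_to_file = functools.partial(print_debug_information, out=log_fh)
# `debug_to_file` now has the plain info_hook signature expected by
# optimize_pulses() and can be passed as info_hook.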
Note
This routine implements the full signature of an info_hook in optimize_pulses(), excluding out. However, since the info_hook only allows for keyword arguments, it is usually much simpler to use Python’s variable keyword arguments syntax (**kwargs). For example, consider the following info_hook that prints (and stores) the value of the real-part gate fidelity:

def print_fidelity(**kwargs):
    F_re = np.average(np.array(kwargs['tau_vals']).real)
    print(" F = %f" % F_re)
    return F_re
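Extending that pattern, here is a hedged sketch of a hook that also uses the info_vals argument documented above to report the change in fidelity between iterations (assuming the hook is used on its own, so that info_vals holds its own previous return values; the function name is illustrative):

import numpy as np

def print_fidelity_change(**kwargs):
    F_re = np.average(np.array(kwargs['tau_vals']).real)
    if kwargs['info_vals']:
        # The last entry of `info_vals` is this hook's return value from
        # the previous iteration, i.e. the previous fidelity.
        delta = F_re - kwargs['info_vals'][-1]
        print(" F = %f (dF = %+f)" % (F_re, delta))
    else:
        print(" F = %f" % F_re)
    return F_re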
- krotov.info_hooks.print_table(*, J_T, show_g_a_int_per_pulse=False, J_T_prev=None, unicode=True, col_formats=('%d', '%.2e', '%.2e', '%.2e', '%.2e', '%.2e', '%.2e', '%d'), col_headers=None, out=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]¶
Print a tabular overview of the functional values in the iteration.
An example output is:
iter.      J_T    ∫gₐ(t)dt         J        ΔJ_T         ΔJ   secs
    0  1.00e+00   0.00e+00  1.00e+00         n/a        n/a      0
    1  7.65e-01   2.33e-02  7.88e-01   -2.35e-01  -2.12e-01      1
    2  5.56e-01   2.07e-02  5.77e-01   -2.09e-01  -1.88e-01      1
The table has the following columns:
iteration number
value of the final-time functional \(J_T\)
If show_g_a_int_per_pulse is True and there is more than one control pulse: one column for each pulse, containing the value of \(\int_0^T \frac{\lambda_{a, i}}{S_i(t)} \Abs{\Delta \epsilon_i(t)}^2 \dd t\). No such columns are present in the above example.
The value of \(\sum_i \int_0^T g_a(\epsilon_i(t)) \dd t = \sum_i \int_0^T \frac{\lambda_{a, i}}{S_i(t)} \Abs{\Delta \epsilon_i(t)}^2 \dd t\), or just \(\int_0^T \frac{\lambda_{a}}{S(t)} \Abs{\Delta \epsilon(t)}^2 \dd t\) if there is only a single control pulse (as in the above example output). This value (respectively the individual values with show_g_a_int_per_pulse) should always be at least three orders of magnitude smaller than the pulse fluence \(\sum_i\int_0^T \Abs{\epsilon_i(t)}^2 \dd t\). Larger changes in the pulse amplitude may be a sign of a “pulse explosion” due to values for \(\lambda_{a,i}\) that are too small. Changes in \(\sum_i \int_0^T \frac{\lambda_{a, i}}{S_i(t)} \Abs{\Delta \epsilon_i(t)}^2 \dd t\) are often a better indicator of whether the optimization is “speeding up”/”slowing down”/reaching convergence than the values of \(J_T\).
The value of the total functional \(J = J_T + \sum_i \int_0^T g_a(\epsilon_i(t)) \dd t\)
The change \(\Delta J_T\) in the final time functional compared to the previous iteration. This should be a negative value, indicating monotonic convergence in a minimization of \(J_T\).
The change \(\Delta J\) in the total functional compared to the previous iteration. This is evaluated as \(\Delta J = \Delta J_T + \sum_i \int_0^T g_a(\epsilon_i(t)) \dd t\). Somewhat counter-intuitively, \(\Delta J\) does not contain a contribution from the \(g_a(t)\) of the previous iteration. This is because the \(\Delta \epsilon_i(t)\) on which \(g_a(t)\) depends must be evaluated with respect to the same reference field (the guess pulse of the current iteration), so that \(\Delta \epsilon_i(t) = 0\) when evaluated with the optimized pulse of the previous iteration (which is the same as the guess pulse of the current iteration).
The number of seconds in wallclock time spent on the iteration
After the last column, an indicator * or ** may be shown if there is a loss of monotonic convergence in \(\Delta J_T\) and/or \(\Delta J\). Krotov’s method mathematically guarantees a negative \(\Delta J\) in the continuous limit. Assuming there are no errors in the time propagation, or in the chi_constructor passed to optimize_pulses(), a loss of monotonic convergence is due to the \(\lambda_a\) associated with the pulses (via pulse_options in optimize_pulses()) being too small. In practice, we usually don’t care too much about a loss of monotonic convergence in \(\Delta J\), but a loss of convergence in \(\Delta J_T\) is a serious sign of trouble. It is often associated with sharp discontinuous spikes in the optimized pulses, or a dramatic increase in the pulse amplitude.
- Parameters
J_T (callable) – A function that extracts the value of the final time functional from the keyword-arguments passed to the info_hook.
show_g_a_int_per_pulse (bool) – If True, print a column with the value of \(\int_0^T g_a(\epsilon_i(t)) \dd t = \int_0^T \frac{\lambda_{a, i}}{S_i(t)} \Abs{\Delta \epsilon_i(t)}^2 \dd t\) for every pulse \(\epsilon_i(t)\). Otherwise, only print the sum over those integrals for all pulses.
J_T_prev (None or callable) – A function that extracts the value of the final time functional from the previous iteration. If None, use the last values from the info_vals passed to the info_hook.
unicode (bool) – Whether to use unicode symbols for the column headers. Some systems have broken monospace fonts in the Jupyter notebook that cause the headers not to line up as intended. No effect if col_headers is given.
col_formats (tuple) – Tuple of exactly 8 percent-format strings for each column of values in the table (see items 1-8 above). These must each format a single value (an integer for the first and last column, and a float for all other columns).
col_headers (None or tuple) – A tuple of exactly 8 strings that will be used for column headers (see items 1-8 above). If None, default values depending on unicode will be used. The third element (“lbl”) of the tuple must support lbl.format(l=l) for an integer l (the one-based index of the control), since there will be one such column for each control.
out – An open file handle to which the table is written. Defaults to stdout.
The widths of the columns are automatically determined both from the length of the column headers and the length of the formatted values.
- Raises
ValueError – If col_formats and/or col_headers are of the wrong length, type, or invalid format.
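As a usage sketch: print_table is typically configured once with the parameters above and the result passed as info_hook to optimize_pulses(). The choice of krotov.functionals.J_T_ss below is an assumption; any callable that extracts the final-time functional from the info_hook keyword arguments would do.

import sys
import krotov

# Configure the table: ASCII column headers and, when there is more than
# one control, an individual ∫gₐ(t)dt column per pulse.
table_hook = krotov.info_hooks.print_table(
    J_T=krotov.functionals.J_T_ss,   # assumed final-time functional
    show_g_a_int_per_pulse=True,
    unicode=False,
    out=sys.stdout,
)

# `table_hook` is then passed to optimize_pulses() as info_hook=table_hook.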