krotov.convergence module
Routines for check_convergence in krotov.optimize.optimize_pulses()
A check_convergence function may be used to determine whether an optimization has converged, and thus can be stopped before the maximum number of iterations (iter_stop) is reached. A function suitable for check_convergence must receive a Result object and return a value that evaluates as True or False in a Boolean context, indicating whether or not the optimization has converged.
The Result object that the check_convergence function receives as an argument will be up-to-date for the current iteration. That is, it will already contain the current values from optimize_pulses()'s info_hook in Result.info_vals, the current tau_vals, etc. The Result.optimized_controls attribute will contain the current optimized pulses (defined on the intervals of tlist).
The check_convergence function must not modify the Result object it receives in any way. The proper place for custom modifications after each iteration in optimize_pulses() is through the modify_params_after_iter routine (e.g., dynamically adjusting \(\lambda_a\) if convergence is too slow or pulse updates are too large).
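As an illustration, a minimal sketch of such a routine is shown below. It assumes that modify_params_after_iter receives the keyword arguments info_vals (the values collected from the info_hook so far) and lambda_vals (one \(\lambda_a\) value per pulse); the doubling heuristic is purely illustrative, and the exact hook interface should be checked against the documentation of optimize_pulses():

def increase_lambda_a_on_overshoot(**kwargs):
    # Sketch only: assumes 'info_vals' holds the values returned by the
    # info_hook (here: J_T) and 'lambda_vals' is a mutable list with one
    # λₐ value per pulse.
    info_vals = kwargs['info_vals']
    lambda_vals = kwargs['lambda_vals']
    if len(info_vals) >= 2 and info_vals[-1] > info_vals[-2]:
        # J_T went up: double λₐ to make the pulse updates smaller
        for i in range(len(lambda_vals)):
            lambda_vals[i] *= 2.0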
It is recommended that a check_convergence function return None (which is False in a Boolean context) if the optimization has not yet converged. If the optimization has converged, check_convergence should return a message string (which is True in a Boolean context). The returned string will be included in the final Result.message.
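Following this convention, a hand-written check_convergence might look like the sketch below (the threshold of \(10^{-5}\), and the assumption that the info_hook stores the value of \(J_T\) in Result.info_vals, are illustrative):

def check_convergence(result):
    # Sketch: stop once J_T, as stored by the info_hook, drops below 1e-5.
    if result.info_vals and result.info_vals[-1] < 1e-5:
        return 'J_T < 1e-5'  # message ends up in the final Result.message
    return None  # None is False in a Boolean context: keep iterating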
A typical usage for check_convergence is ending the optimization when the error falls below a specified limit. Such a check_convergence function can be generated by value_below(). Often, this "error" is the value of the functional \(J_T\). However, it is up to the user to ensure that the explicit value of \(J_T\) can be calculated; \(J_T\) in Krotov's method is completely implicit, and enters the optimization only indirectly via the chi_constructor passed to optimize_pulses(). A specific chi_constructor implies the minimization of the functional \(J_T\) from which chi_constructor was derived. A convergence check based on the explicit value of \(J_T\) can be realized by passing an info_hook that returns the value of \(J_T\). This value is then stored in Result.info_vals, which is where value_below() looks for it.
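Concretely, the wiring might look like the following sketch, where objectives, pulse_options, and tlist are placeholders for a previously set-up optimization problem, and where the chis_ss/J_T_ss pair stands in for whichever functional is actually being optimized:

import krotov

result = krotov.optimize_pulses(
    objectives, pulse_options, tlist,  # placeholders: problem setup
    propagator=krotov.propagators.expm,
    # chis_ss implies the minimization of the functional J_T_ss ...
    chi_constructor=krotov.functionals.chis_ss,
    # ... so have the info_hook calculate J_T_ss explicitly:
    info_hook=krotov.info_hooks.print_table(J_T=krotov.functionals.J_T_ss),
    # value_below() then finds the J_T value in Result.info_vals:
    check_convergence=krotov.convergence.value_below('1e-5', name='J_T'),
)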
An info_hook could also calculate and return an arbitrary measure of success, not related to \(J_T\) (e.g., a fidelity, or a concurrence). Since we expect the optimization (the minimization of \(J_T\)) to maximize a fidelity, a convergence check might want to look at whether the calculated value is above some threshold. This can be done via value_above().
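For example (with both the 0.999 threshold and the assumption that the info_hook returns a fidelity being illustrative):

import krotov

# Stop once the fidelity returned by the info_hook (and hence stored in
# Result.info_vals) exceeds 0.999.
check_convergence = krotov.convergence.value_above('0.999', name='fidelity')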
In addition to looking at the value of some figure of merit, one might want to stop the optimization when there is insufficient improvement between iterations. The delta_below() function generates a check_convergence function for this purpose. Multiple convergence conditions ("stop the optimization when \(J_T\) reaches \(10^{-5}\), or if \(\Delta J_T < 10^{-6}\)") can be defined via Or().
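The combined condition quoted above translates directly into code:

import krotov

# Stop when J_T reaches 1e-5, or when the change in J_T between
# consecutive iterations falls below 1e-6, whichever happens first.
check_convergence = krotov.convergence.Or(
    krotov.convergence.value_below('1e-5', name='J_T'),
    krotov.convergence.delta_below('1e-6', name='ΔJ_T'),
)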
While Krotov's method is guaranteed to converge monotonically in the continuous limit, this no longer strictly holds when time is discretized (in particular, if \(\lambda_a\) is too small). You can use check_monotonic_error() or check_monotonic_fidelity() as a check_convergence function that stops the optimization when monotonic convergence is lost.
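Since these checks are themselves valid check_convergence functions, they chain with other conditions in the same way; e.g., to stop either on success or on loss of monotonicity (the threshold is illustrative):

import krotov

check_convergence = krotov.convergence.Or(
    krotov.convergence.value_below('1e-5', name='J_T'),
    krotov.convergence.check_monotonic_error,  # stop if J_T ever increases
)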
The check_convergence routine may also be used to store the current state of the optimization to disk, as a side effect. This is achieved by the routine dump_result(), which can be chained with other convergence checks via Or(). Dumping the current state of the optimization at regular intervals protects against losing the results of a long-running optimization in the event of a crash.
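Because the routine returned by dump_result() returns None on success, chaining it via Or() adds the dump as a pure side effect without affecting when the optimization stops (the interval of 50 iterations is illustrative):

import krotov

check_convergence = krotov.convergence.Or(
    krotov.convergence.value_below('1e-5', name='J_T'),
    # write the full Result to disk every 50 iterations, as a side effect
    krotov.convergence.dump_result('oct_result.dump', every=50),
)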
Summary

Functions:

Or(): Chain multiple check_convergence functions together in a logical Or.
check_monotonic_error(): Check for monotonic convergence with respect to the error.
check_monotonic_fidelity(): Check for monotonic convergence with respect to the fidelity.
delta_below(): Constructor for a routine that checks if \(\Abs{v_1 - v_0} < \varepsilon\).
dump_result(): Return a function for dumping the result every so many iterations.
value_above(): Constructor for routine that checks if a value is above limit.
value_below(): Constructor for routine that checks if a value is below limit.
__all__: Or, check_monotonic_error, check_monotonic_fidelity, delta_below, dump_result, value_above, value_below
Reference

krotov.convergence.Or(*funcs)

Chain multiple check_convergence functions together in a logical Or.

Each parameter must be a function suitable to pass to optimize_pulses() as check_convergence. It must receive a Result object and should return None or a string message.

Returns: A function check_convergence(result) that returns the result of the first "non-passing" function in *funcs. A "non-passing" result is one that evaluates to True in a Boolean context (it should be a string message).

Return type: callable
krotov.convergence.value_below(limit, spec=('info_vals', T[-1]), name=None, **kwargs)

Constructor for routine that checks if a value is below limit.

Parameters:
- limit (float or str): A float value (or str-representation of a float) against which to compare the value extracted from Result.
- spec: A specification of the Result attribute from which to extract the value to compare against limit. Defaults to a specification extracting the last value in Result.info_vals (returned by the info_hook passed to optimize_pulses()). This should be some kind of error measure, e.g., the value of the functional \(J_T\) that is being minimized.
- name (str or None): A name identifying the checked value, used for the message returned by the check_convergence routine. Defaults to str(spec).
- **kwargs: Keyword arguments to pass to glom() (see Note).

Returns: A function check_convergence(result) that extracts the value specified by spec from the Result object and checks it against limit. If the value is below the limit, it returns an appropriate message string. Otherwise, it returns None.

Return type: callable

Note: The spec can be a callable that receives Result and returns the value to check against the limit. In that case, you should also pass a name like 'J_T' or 'error' as a label for the value. For more advanced use cases, spec can be a glom()-specification that extracts the value to check from the Result object as glom.glom(result, spec, **kwargs).

Example:
>>> check_convergence = value_below(
...     limit='1e-4',
...     spec=lambda r: r.info_vals[-1],  # same as the default spec
...     name='J_T'
... )
>>> r = krotov.result.Result()
>>> r.info_vals.append(1e-4)
>>> check_convergence(r)  # returns None
>>> r.info_vals.append(9e-5)
>>> check_convergence(r)
'J_T < 1e-4'
krotov.convergence.value_above(limit, spec=('info_vals', T[-1]), name=None, **kwargs)

Constructor for routine that checks if a value is above limit.

Like value_below(), but for checking whether an extracted value is above, not below, a given limit. By default, it looks at the last value in Result.info_vals, under the assumption that the info_hook passed to optimize_pulses() returns some figure of merit we expect to be maximized, like a fidelity. Note that an info_hook is free to return an arbitrary value, not necessarily the value of the functional \(J_T\) that the optimization is minimizing (specified implicitly via the chi_constructor argument to optimize_pulses()).

Example:
>>> check_convergence = value_above(
...     limit='0.999',
...     spec=lambda r: r.info_vals[-1],
...     name='Fidelity'
... )
>>> r = krotov.result.Result()
>>> r.info_vals.append(0.9)
>>> check_convergence(r)  # returns None
>>> r.info_vals.append(1 - 1e-6)
>>> check_convergence(r)
'Fidelity > 0.999'
krotov.convergence.delta_below(limit, spec1=('info_vals', T[-1]), spec0=('info_vals', T[-2]), absolute_value=True, name=None, **kwargs)

Constructor for a routine that checks if \(\Abs{v_1 - v_0} < \varepsilon\).

Parameters:
- limit (float or str): A float value (or str-representation of a float) for \(\varepsilon\).
- spec1: A glom() specification of the Result attribute from which to extract \(v_1\). Defaults to a spec extracting the last value in Result.info_vals.
- spec0: A glom() specification of the Result attribute from which to extract \(v_0\). Defaults to a spec extracting the last-but-one value in Result.info_vals.
- absolute_value (bool): If False, check for \(v_1 - v_0 < \varepsilon\) instead of the absolute value.
- name (str or None): A name identifying the delta, used for the message returned by the check_convergence routine. Defaults to "Δ({spec1},{spec0})".
- **kwargs: Keyword arguments to pass to glom().

Note: You can use delta_below() to implement a check for strict monotonic convergence, e.g. when the info_hook returns the optimization error, by flipping spec0 and spec1, setting limit to zero, and setting absolute_value to False. See check_monotonic_error().

Example:
>>> check_convergence = delta_below(limit='1e-4', name='ΔJ_T')
>>> r = krotov.result.Result()
>>> r.info_vals.append(9e-1)
>>> check_convergence(r)  # None
>>> r.info_vals.append(1e-1)
>>> check_convergence(r)  # None
>>> r.info_vals.append(4e-4)
>>> check_convergence(r)  # None
>>> r.info_vals.append(2e-4)
>>> check_convergence(r)  # None
>>> r.info_vals.append(1e-6)
>>> check_convergence(r)  # None
>>> r.info_vals.append(1e-7)
>>> check_convergence(r)
'ΔJ_T < 1e-4'
krotov.convergence.check_monotonic_error(result)

Check for monotonic convergence with respect to the error.

Check that the last value in Result.info_vals is smaller than the last-but-one value. If yes, return None. If no, return an appropriate error message.

This assumes that the info_hook passed to optimize_pulses() returns the value of the functional \(J_T\) (or another quantity that we expect to be minimized), which is then available in Result.info_vals.

Example:
>>> r = krotov.result.Result()
>>> r.info_vals.append(9e-1)
>>> check_monotonic_error(r)  # None
>>> r.info_vals.append(1e-1)
>>> check_monotonic_error(r)  # None
>>> r.info_vals.append(2e-1)
>>> check_monotonic_error(r)
'Loss of monotonic convergence; error decrease < 0'

See also: Use check_monotonic_fidelity() when the info_hook returns a "fidelity", that is, a measure that should increase in each iteration.
krotov.convergence.check_monotonic_fidelity(result)

Check for monotonic convergence with respect to the fidelity.

This is like check_monotonic_error(), but looking for a monotonic increase in the values in Result.info_vals. Thus, it is assumed that the info_hook returns a fidelity (to be maximized), not an error (like \(J_T\), to be minimized).

Example:
>>> r = krotov.result.Result()
>>> r.info_vals.append(0.0)
>>> check_monotonic_fidelity(r)  # None
>>> r.info_vals.append(0.2)
>>> check_monotonic_fidelity(r)  # None
>>> r.info_vals.append(0.15)
>>> check_monotonic_fidelity(r)
'Loss of monotonic convergence; fidelity increase < 0'
krotov.convergence.dump_result(filename, every=10)

Return a function for dumping the result every so many iterations.

For long-running optimizations, it can be useful to dump the current state of the optimization every once in a while, so that the result is not lost in the event of a crash or unexpected shutdown. This function returns a routine that can be passed as check_convergence; it does nothing except dump the current Result object to a file (cf. Result.dump()). Failure to write the dump file stops the optimization.

Parameters:
- filename (str): Name of the file to dump to. This may include a field {iter}, which will be formatted with the most recent iteration number via str.format(). Existing files will be overwritten.
- every (int): Interval (in iterations) at which to dump the result.

Note: Choose every so that dumping does not happen more than once every few minutes, at most. Dumping after every single iteration may slow down the optimization due to I/O overhead.

Examples:
Dump every 10 iterations to the same file oct_result.dump:
>>> check_convergence = dump_result('oct_result.dump')

Dump every 100 iterations to files oct_result_000100.dump, oct_result_000200.dump, etc.:
>>> check_convergence = dump_result(
...     'oct_result_{iter:06d}.dump', every=100)