inferno.trainers.callbacks package

Submodules

inferno.trainers.callbacks.base module

class inferno.trainers.callbacks.base.Callback[source]

Bases: object

Recommended (but not required) base class for callbacks.

bind_trainer(trainer)[source]
debug_print(message)[source]
get_config()[source]
classmethod get_instances()[source]
classmethod register_instance(instance)[source]
set_config(config_dict)[source]
toggle_debug()[source]
trainer
unbind_trainer()[source]
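
The `Callback` interface above boils down to a small contract: a callback can hold a reference to its trainer once bound, and release it on unbind. A standalone sketch of that contract (illustrative names only, not inferno's actual implementation):

```python
# Sketch of the Callback bind/unbind contract. `Trainer` is a hypothetical
# stand-in; the real class lives in inferno.trainers.basic.

class Trainer:
    """Hypothetical stand-in for inferno's Trainer."""
    pass

class MyCallback:
    def __init__(self):
        self._trainer = None
        self._debug = False

    def bind_trainer(self, trainer):
        # Called when the callback is registered with a CallbackEngine.
        self._trainer = trainer

    def unbind_trainer(self):
        self._trainer = None

    @property
    def trainer(self):
        return self._trainer

    def toggle_debug(self):
        # Flip debug printing on or off.
        self._debug = not self._debug
```

Subclassing the real `Callback` additionally gives you instance registration (`register_instance` / `get_instances`) and config serialization (`get_config` / `set_config`) for free.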
class inferno.trainers.callbacks.base.CallbackEngine[source]

Bases: object

Gathers and manages callbacks.

Callbacks are callables invoked by trainers when certain events (‘triggers’) occur. Any callable object can serve as a callback; if it additionally has a bind_trainer method, that method is called when the callback is registered. It is recommended that callbacks (or their __call__ methods) accept keyword arguments via the double-star (**kwargs) syntax, so each callback can pick out just the arguments it needs.

BEGIN_OF_EPOCH = 'begin_of_epoch'
BEGIN_OF_FIT = 'begin_of_fit'
BEGIN_OF_SAVE = 'begin_of_save'
BEGIN_OF_TRAINING_ITERATION = 'begin_of_training_iteration'
BEGIN_OF_TRAINING_RUN = 'begin_of_training_run'
BEGIN_OF_VALIDATION_ITERATION = 'begin_of_validation_iteration'
BEGIN_OF_VALIDATION_RUN = 'begin_of_validation_run'
END_OF_EPOCH = 'end_of_epoch'
END_OF_FIT = 'end_of_fit'
END_OF_SAVE = 'end_of_save'
END_OF_TRAINING_ITERATION = 'end_of_training_iteration'
END_OF_TRAINING_RUN = 'end_of_training_run'
END_OF_VALIDATION_ITERATION = 'end_of_validation_iteration'
END_OF_VALIDATION_RUN = 'end_of_validation_run'
TRIGGERS = {'begin_of_epoch', 'begin_of_fit', 'begin_of_save', 'begin_of_training_iteration', 'begin_of_training_run', 'begin_of_validation_iteration', 'begin_of_validation_run', 'end_of_epoch', 'end_of_fit', 'end_of_save', 'end_of_training_iteration', 'end_of_training_run', 'end_of_validation_iteration', 'end_of_validation_run'}
bind_trainer(trainer)[source]
call(trigger, **kwargs)[source]
get_config()[source]
rebind_trainer_to_all_callbacks()[source]
register_callback(callback, trigger='auto', bind_trainer=True)[source]
register_new_trigger(trigger_name)[source]
set_config(config_dict)[source]
trainer_is_bound
unbind_trainer()[source]
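
The dispatch pattern described above can be sketched in a few lines (hypothetical, simplified names; not inferno's internals):

```python
# Minimal sketch of trigger-based callback dispatch: one list of callables
# per trigger; call() fans a trigger event out to everything registered.

class MiniCallbackEngine:
    TRIGGERS = {'begin_of_epoch', 'end_of_epoch',
                'begin_of_training_iteration', 'end_of_training_iteration'}

    def __init__(self):
        self._registry = {trigger: [] for trigger in self.TRIGGERS}
        self._trainer = None

    def bind_trainer(self, trainer):
        self._trainer = trainer

    def register_callback(self, callback, trigger):
        assert trigger in self.TRIGGERS, "unknown trigger: %s" % trigger
        self._registry[trigger].append(callback)
        # If the callback knows how to bind a trainer, let it.
        if hasattr(callback, 'bind_trainer') and self._trainer is not None:
            callback.bind_trainer(self._trainer)

    def call(self, trigger, **kwargs):
        # Callbacks take **kwargs, so each one extracts what it needs.
        for callback in self._registry[trigger]:
            callback(**kwargs)

engine = MiniCallbackEngine()
log = []
engine.register_callback(lambda **kw: log.append(kw['epoch_count']),
                         trigger='end_of_epoch')
engine.call('end_of_epoch', epoch_count=0, iteration_count=100)
```

This is why the double-star convention matters: the engine passes the same keyword bundle to every callback on a trigger, and each callback keeps only the keys it cares about.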

inferno.trainers.callbacks.essentials module

class inferno.trainers.callbacks.essentials.DumpHDF5Every(frequency, to_directory, filename_template='dump.{mode}.epoch{epoch_count}.iteration{iteration_count}.h5', force_dump=False, dump_after_every_validation_run=False)[source]

Bases: inferno.trainers.callbacks.base.Callback

Dumps intermediate training states to an HDF5 file.

add_to_dump_cache(key, value)[source]
clear_dump_cache()[source]
dump(mode)[source]
dump_every
dump_now
dump_state(key, dump_while='training')[source]
dump_states(keys, dump_while='training')[source]
end_of_training_iteration(**_)[source]
end_of_validation_run(**_)[source]
get_file_path(mode)[source]
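
Two pieces of bookkeeping sit behind this callback: expanding the filename template, and deciding whether the current iteration is a dump iteration. A sketch of both (illustrative only; the real callback writes its dump cache to HDF5):

```python
# Default filename template from the signature above; placeholders are
# filled from the trainer's current state.
FILENAME_TEMPLATE = 'dump.{mode}.epoch{epoch_count}.iteration{iteration_count}.h5'

def get_file_path(mode, epoch_count, iteration_count):
    return FILENAME_TEMPLATE.format(mode=mode,
                                    epoch_count=epoch_count,
                                    iteration_count=iteration_count)

def dump_now(iteration_count, frequency):
    # Dump once every `frequency` training iterations (simplified rule).
    return iteration_count % frequency == 0

path = get_file_path('training', epoch_count=3, iteration_count=1500)
```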
class inferno.trainers.callbacks.essentials.NaNDetector[source]

Bases: inferno.trainers.callbacks.base.Callback

end_of_training_iteration(**_)[source]
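
At its core, the check this callback performs at the end of each training iteration is a NaN test on the loss. A hedged sketch (the real callback reads the loss from its bound trainer):

```python
import math

def end_of_training_iteration(training_loss, **_):
    # Fail fast: a NaN loss silently poisons all subsequent updates.
    if math.isnan(training_loss):
        raise RuntimeError("NaN detected in training loss.")

end_of_training_iteration(training_loss=0.42, iteration_count=7)  # passes
```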
class inferno.trainers.callbacks.essentials.ParameterEMA(momentum)[source]

Bases: inferno.trainers.callbacks.base.Callback

Maintain a moving average of network parameters.

apply()[source]
end_of_training_iteration(**_)[source]
maintain()[source]
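
The update this callback maintains is a standard exponential moving average, sketched here on plain floats (the real callback operates on the network's parameter tensors; the exact convention for `momentum` is an assumption):

```python
# EMA update, assuming momentum weights the existing shadow value:
#     shadow <- momentum * shadow + (1 - momentum) * parameter

def ema_update(shadow, parameters, momentum):
    return [momentum * s + (1.0 - momentum) * p
            for s, p in zip(shadow, parameters)]

shadow = [0.0, 0.0]
shadow = ema_update(shadow, [1.0, 2.0], momentum=0.9)
```

`maintain()` corresponds to running this update each training iteration; `apply()` corresponds to copying the shadow values back onto the network.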
class inferno.trainers.callbacks.essentials.PersistentSave(template='checkpoint.pytorch.epoch{epoch_count}.iteration{iteration_count}')[source]

Bases: inferno.trainers.callbacks.base.Callback

begin_of_save(**kwargs)[source]
end_of_save(save_to_directory, **_)[source]
class inferno.trainers.callbacks.essentials.SaveAtBestValidationScore(smoothness=0, verbose=False)[source]

Bases: inferno.trainers.callbacks.base.Callback

Triggers a save at the best EMA (exponential moving average) of the validation score. The basic Trainer has built-in support for saving at the best validation score, but this callback might eventually replace that functionality.

end_of_validation_run(**_)[source]
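
A sketch of the decision rule, under the assumptions that `smoothness` is the EMA weight on the previous smoothed value and that a lower validation score is better (consistent with the monitors in the scheduling module below):

```python
# Hypothetical tracker: smooth the validation score with an EMA and report
# whether the smoothed score is a new best (i.e. whether to save).

class BestScoreTracker:
    def __init__(self, smoothness=0.0):
        self.smoothness = smoothness
        self.smoothed = None
        self.best = None

    def update(self, validation_score):
        if self.smoothed is None:
            self.smoothed = validation_score
        else:
            self.smoothed = (self.smoothness * self.smoothed
                             + (1.0 - self.smoothness) * validation_score)
        if self.best is None or self.smoothed < self.best:
            self.best = self.smoothed
            return True   # new best -> trigger a save
        return False

tracker = BestScoreTracker(smoothness=0.5)
```

With `smoothness=0` (the default) no smoothing occurs and every raw improvement triggers a save.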

inferno.trainers.callbacks.scheduling module

class inferno.trainers.callbacks.scheduling.AutoLR(factor, patience, required_minimum_relative_improvement=0, consider_improvement_with_respect_to='best', cooldown_duration=None, monitor='auto', monitor_momentum=0, monitor_while='auto', exclude_param_groups=None, verbose=False)[source]

Bases: inferno.trainers.callbacks.scheduling._Scheduler

Callback to decay or hike the learning rate automatically when a specified monitor stops improving.

The monitor should be decreasing, i.e. a lower value indicates better performance.

cooldown_duration
decay()[source]
duration_since_last_decay
duration_since_last_improvment
end_of_training_iteration(**_)[source]
end_of_validation_run(**_)[source]
in_cooldown
static is_significantly_less_than(x, y, min_relative_delta)[source]
maintain_monitor_moving_average()[source]
monitor_value_has_significantly_improved
out_of_patience
patience
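
The core loop of such a scheduler can be sketched as follows: track the best monitor value seen so far, count validation runs without significant improvement, and decay the learning rate by `factor` once patience runs out. This is a simplified illustration (no cooldown, improvement measured against the best value, and the exact out-of-patience threshold is an assumption):

```python
class MiniAutoLR:
    def __init__(self, factor, patience, min_relative_improvement=0.0):
        self.factor = factor
        self.patience = patience
        self.min_relative_improvement = min_relative_improvement
        self.best = None
        self.runs_without_improvement = 0

    @staticmethod
    def is_significantly_less_than(x, y, min_relative_delta):
        # x counts as improved only if it undercuts y by a relative margin.
        return x < y * (1.0 - min_relative_delta)

    def step(self, monitor_value, lr):
        if self.best is None or self.is_significantly_less_than(
                monitor_value, self.best, self.min_relative_improvement):
            self.best = monitor_value
            self.runs_without_improvement = 0
        else:
            self.runs_without_improvement += 1
        if self.runs_without_improvement > self.patience:
            self.runs_without_improvement = 0
            lr = lr * self.factor   # decay
        return lr
```

The `required_minimum_relative_improvement` argument guards against plateaus where the monitor creeps down by negligible amounts and thereby keeps resetting the patience counter.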
class inferno.trainers.callbacks.scheduling.AutoLRDecay(factor, patience, required_minimum_relative_improvement=0, consider_improvement_with_respect_to='best', cooldown_duration=None, monitor='auto', monitor_momentum=0, monitor_while='auto', exclude_param_groups=None, verbose=False)[source]

Bases: inferno.trainers.callbacks.scheduling.AutoLR

Callback to decay the learning rate automatically when a specified monitor stops improving.

The monitor should be decreasing, i.e. a lower value indicates better performance.

class inferno.trainers.callbacks.scheduling.DecaySpec(duration, factor)[source]

Bases: object

A class to specify when to decay (or hike) LR and by what factor.

classmethod build_from(args)[source]
match(iteration_count=None, epoch_count=None, when_equal_return=True)[source]
new()[source]
class inferno.trainers.callbacks.scheduling.ManualLR(decay_specs, exclude_param_groups=None)[source]

Bases: inferno.trainers.callbacks.base.Callback

decay(factor)[source]
end_of_training_iteration(**_)[source]
match()[source]
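
How `ManualLR` combines with `DecaySpec` can be sketched like this: each spec names a point in training and a multiplicative factor, and the callback applies the factor when the current iteration matches. Illustrative only; inferno's exact matching semantics (e.g. epoch-based matching and `when_equal_return`) are simplified away:

```python
class MiniDecaySpec:
    def __init__(self, duration, factor):
        self.duration = duration   # iteration count at which to decay
        self.factor = factor

    def match(self, iteration_count):
        return iteration_count == self.duration

def apply_manual_lr(decay_specs, iteration_count, lr):
    # Check every spec at the end of each training iteration and apply
    # the factor of any spec that matches.
    for spec in decay_specs:
        if spec.match(iteration_count):
            lr = lr * spec.factor
    return lr

specs = [MiniDecaySpec(1000, 0.1), MiniDecaySpec(2000, 0.1)]
```

A factor above 1 hikes rather than decays the learning rate, mirroring the "decay (or hike)" wording in the `DecaySpec` description above.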

Module contents