wbia_cnn.models package

Submodules

wbia_cnn.models._model_legacy module

class wbia_cnn.models._model_legacy._ModelLegacy[source]

Bases: object

Contains old functions for backwards compatibility that may eventually be deprecated.

_fix_center_mean_std()[source]
load_old_weights_kw(old_weights_fpath)[source]
load_old_weights_kw2(old_weights_fpath)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.abstract_models module

Directory structure of training

The network directory is the root of the structure and is typically in _ibeis_cache/nets for ibeis databases. Otherwise it is custom-defined (like in .cache/wbia_cnn/training for mnist tests).

# era=(group of epochs)

|-- netdir <training_dpath>

Datasets contain ingested data packed into a single file for quick loading. Data can be presplit into testing / learning / validation sets. Metadata is always a dictionary where keys specify columns and each item corresponds to a row of data. Non-corresponding metadata is currently not supported, but should probably be located in a manifest.json file.
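
As a concrete illustration, here is a minimal sketch of loading such a packed dataset with the standard pickle module. The dpath and dataset_id below are hypothetical placeholders, not values produced by wbia_cnn:

from os.path import join
import pickle

dpath = 'training/full'    # hypothetical <training_dpath>/full directory
dataset_id = 'mnist'       # hypothetical dataset id

with open(join(dpath, dataset_id + '_data.pkl'), 'rb') as file_:
    data = pickle.load(file_)        # e.g. an (N, h, w, c) ndarray
with open(join(dpath, dataset_id + '_labels.pkl'), 'rb') as file_:
    labels = pickle.load(file_)      # e.g. an (N,) ndarray
with open(join(dpath, dataset_id + '_metadata.pkl'), 'rb') as file_:
    metadata = pickle.load(file_)    # dict: column name -> per-row values

# metadata keys specify columns; each column corresponds row-for-row to the data
assert all(len(column) == len(labels) for column in metadata.values())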

# TODO: what if the same data has tasks that use different labels? # Need to incorporate that structure.

The model directory must keep track of several things:
  • The network architecture (which may depend on the dataset being used)
    • input / output shape

    • network layers

  • The state of learning
    • epoch/era number

    • learning rate

    • regularization rate

  • diagnostic information
    • graphs of loss / error rates

    • images of convolutional weights

    • other visualizations

The trained model keeps track of the trained weights and is now independent of the dataset. Finalized weights should be copied to and loaded from here.

| | |-- full
| | | |-- {dataset_id}_data.pkl
| | | |-- {dataset_id}_labels.pkl
| | | |-- {dataset_id}_labels_{task1}.pkl?
| | | |-- {dataset_id}_labels_{task2}.pkl?
| | | |-- {dataset_id}_metadata.pkl
| | |-- splits
| | | |-- {split_id}_{num} *
| | | | |-- {dataset_id}_{split_id}_data.pkl
| | | | |-- {dataset_id}_{split_id}_labels.pkl
| | | | |-- {dataset_id}_{split_id}_metadata.pkl
| | |-- models
| | | |-- arch_{archid} *
| | | | |-- best_results
| | | | | |-- model_state.pkl
| | | | |-- checkpoints
| | | | | |-- {history_id} *
| | | | | | |-- model_history.pkl
| | | | | | |-- model_state.pkl
| | | | |-- progress
| | | | | |-- <latest>
| | | | |-- diagnostics
| | | | | |-- {history_id} *
| | | | | | |-- <files>
| |-- trained_models
| | |-- arch_{archid} *

class wbia_cnn.models.abstract_models.AbstractCategoricalModel(**kwargs)[source]

Bases: wbia_cnn.models.abstract_models.BaseModel

Base model for category classifiers.

custom_labeled_outputs(network_output, y_batch)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output and the labels

custom_unlabeled_outputs(network_output)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output alone

init_encoder(labels)[source]
loss_function(network_output, truth)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models.AbstractVectorModel(**kwargs)[source]

Bases: wbia_cnn.models.abstract_models.BaseModel

Base model for category classifiers.

custom_labeled_outputs(network_output, y_batch)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output and the labels

custom_unlabeled_outputs(network_output)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output alone

init_output_dims(labels)[source]
loss_function(network_output, truth)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models.AbstractVectorVectorModel(**kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractVectorModel

Base model for category classifiers.

custom_labeled_outputs(network_output, y_batch)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output and the labels

rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models.BaseModel(**kwargs)[source]

Bases: wbia_cnn.models._model_legacy._ModelLegacy, wbia_cnn.models.abstract_models._ModelVisualization, wbia_cnn.models.abstract_models._ModelIO, wbia_cnn.models.abstract_models._ModelStrings, wbia_cnn.models.abstract_models._ModelIDs, wbia_cnn.models.abstract_models._ModelBackend, wbia_cnn.models.abstract_models._ModelFitter, wbia_cnn.models.abstract_models._ModelPredicter, wbia_cnn.models.abstract_models._ModelBatch, wbia_cnn.models.abstract_models._ModelUtility, utool.util_dev.NiceRepr

Abstract model providing functionality for all other models to derive from

_init_shape_vars(kwargs)[source]
augment(Xb, yb)[source]
init_arch()[source]
init_from_json(fpath)[source]

fpath = ut.truepath('~/Desktop/manually_saved/arch_injur-shark-resnet_o2_d27_c2942_jzuddodd/model_state_arch_jzuddodd.pkl')

property input_batchsize
property input_channels
property input_height
property input_width
loss_function(network_output, truth)[source]
reinit_weights(W=None)[source]

Initializes weights after the architecture has been defined.

rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models.History[source]

Bases: utool.util_dev.NiceRepr

Manages bookkeeping for training history

_new_era(model, X_learn, y_learn, X_valid, y_valid)[source]

Used to denote a change in hyperparameters during training.

_record_epoch(epoch_info)[source]

Records an epoch in an era.

property current_era_size
classmethod from_oldstyle(era_history)[source]
get_history_hashid()[source]

Builds a hashid that uniquely identifies the architecture and the training procedure this model has gone through to produce the current architecture weights.

get_history_nice()[source]
grouped_epochs()[source]
grouped_epochsT()[source]
property hist_id
CommandLine:

python -m wbia_cnn.models.abstract_models --test-History.hist_id:0

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> model = testdata_model_with_history()
>>> history = model.history
>>> result = str(model.history.hist_id)
>>> print(result)
epoch0002_era012_qewrbbgy
record_epoch(epoch_info)[source]
rewind_to(epoch_num)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

to_json()[source]
property total_epochs
property total_eras
class wbia_cnn.models.abstract_models.LearnState(learning_rate, momentum, weight_decay)[source]

Bases: utool.util_dict.DictLike

Keeps track of parameters that can be changed during theano execution

getitem(key)[source]
init()[source]
keys()[source]
property learning_rate
property momentum
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

setitem(key, value)[source]
property shared
property weight_decay
class wbia_cnn.models.abstract_models._BatchUtility[source]

Bases: object

_pad_labels(yb)[source]

# TODO: FIX data_per_label_input ISSUES. Most models will do the padding implicitly in the layer architecture.

_stack_outputs(theano_fn, output_list)[source]

Combines outputs across batches and returns them in a dictionary keyed by the theano variable output name.
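
A hedged sketch of what this combination step amounts to, with hypothetical names (output_names, batch_outputs) standing in for the theano function's actual output metadata:

import numpy as np

def stack_outputs(output_names, batch_outputs):
    # batch_outputs: one tuple of arrays per batch, in output_names order
    stacked = {}
    for index, name in enumerate(output_names):
        arrays = [outputs[index] for outputs in batch_outputs]
        # concatenate along the batch axis so the result looks unbatched
        stacked[name] = np.concatenate(arrays, axis=0)
    return stacked

# two batches, each producing ('predictions', 'confidences')
batch1 = (np.zeros((16, 10)), np.zeros(16))
batch2 = (np.ones((5, 10)), np.ones(5))
result = stack_outputs(['predictions', 'confidences'], [batch1, batch2])
assert result['predictions'].shape == (21, 10)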

_unwrap_outputs(outputs, X)[source]
classmethod expand_data_indicies(label_idx, data_per_label=1)[source]

When data_per_label > 1, gives the corresponding data indices for the given label indices.
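
For intuition, a numpy sketch of the expansion under the assumption that each label owns a contiguous run of data_per_label data rows (the helper name below is illustrative):

import numpy as np

def expand_data_indices(label_idx, data_per_label=1):
    # label index i owns data rows [i * data_per_label, (i + 1) * data_per_label)
    expanded = [np.arange(data_per_label) + idx * data_per_label
                for idx in label_idx]
    return np.hstack(expanded)

# with 2 data rows per label, labels 0 and 2 own data rows (0, 1) and (4, 5)
assert expand_data_indices(np.array([0, 2]), 2).tolist() == [0, 1, 4, 5]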

rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

classmethod shuffle_input(X, y, w, data_per_label=1, rng=None)[source]
classmethod slice_batch(X, y, w, batch_size, batch_index, data_per_label=1, wraparound=False)[source]
class wbia_cnn.models.abstract_models._ModelBackend[source]

Bases: object

Functions that build and compile theano expressions

_get_labeled_outputs()[source]
_get_network_output()[source]

gets the activations of the output neurons

_get_unlabeled_outputs()[source]
_init_compile_vars(kwargs)[source]
_make_monitor_outputs(parameters, updates)[source]

Builds parameters to monitor the magnitude of updates during learning

_make_updates(parameters, backprop_loss_)[source]
_testdata_batch(dataset, batch_size=16)[source]
property _theano_fn_inputs
property _theano_loss_exprs

Requires that a custom loss function is defined in the inherited class

Ignore:
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> import theano
>>> model, dataset = mnist.testdata_mnist(dropout=.5)
>>> model._init_compile_vars({})  # reset state
>>> model.init_arch()
>>> data, labels = dataset.subset('test')
>>> loss = model._theano_loss_exprs['loss']
>>> loss_item = model._theano_loss_exprs['loss_item']
>>> X_in = theano.In(model._theano_fn_inputs['X_batch'])
>>> y_in = theano.In(model._theano_fn_inputs['y_batch'])
>>> w_in = theano.In(model._theano_fn_inputs['w_batch'])
>>> Xb, yb, wb = model._testdata_batch(dataset, batch_size=16)
>>> # Eval
>>> input1 = {X_in: Xb, y_in: yb, w_in: wb}
>>> input2 = {X_in: Xb, y_in: yb}
>>> _loss = loss.eval(input1)
>>> _loss_item = loss_item.eval(input2)
build()[source]
build_backprop_func()[source]

Computes loss and updates model parameters. Returns diagnostic information.
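
For orientation, a minimal self-contained sketch of a compiled backprop function in theano (a toy softmax classifier with plain SGD, not the model's actual build):

import numpy as np
import theano
import theano.tensor as T

X = T.matrix('X_batch')
y = T.ivector('y_batch')
W = theano.shared(np.zeros((4, 3), dtype=np.float32), name='W')
b = theano.shared(np.zeros(3, dtype=np.float32), name='b')

probs = T.nnet.softmax(T.dot(X, W) + b)
loss = T.nnet.categorical_crossentropy(probs, y).mean()

learning_rate = 0.1
params = [W, b]
grads = T.grad(loss, params)
updates = [(p, p - learning_rate * g) for p, g in zip(params, grads)]

# computes the loss and applies the parameter updates as a side effect
theano_backprop = theano.function([X, y], loss, updates=updates)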

build_forward_func()[source]

Computes loss, but does not learn. Returns diagnostic information.

Ignore:
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> import theano
>>> model, dataset = mnist.testdata_mnist(dropout=.5)
>>> model.init_arch()
>>> batch_size = 16
>>> model.learn_state.init()
>>> Xb, yb, wb = model._testdata_batch(dataset, batch_size)
>>> model._theano_exprs['loss'] = None  # reset any cached loss expression
>>> loss = model._theano_loss_exprs['loss']
>>> loss_item = model._theano_loss_exprs['loss_item']
>>> X_in = theano.In(model._theano_fn_inputs['X_batch'])
>>> y_in = theano.In(model._theano_fn_inputs['y_batch'])
>>> w_in = theano.In(model._theano_fn_inputs['w_batch'])
>>> loss_batch = loss_item.eval({X_in: Xb, y_in: yb})
build_predict_func()[source]

Computes predictions given unlabeled data

custom_labeled_outputs(network_output, y_batch)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output and the labels

custom_unlabeled_outputs(network_output)[source]

override in inherited subclass to enable custom symbolic expressions based on the network output alone

class wbia_cnn.models.abstract_models._ModelBatch[source]

Bases: wbia_cnn.models.abstract_models._BatchUtility

_init_batch_vars(kwargs)[source]
_prepare_batch(Xb_, yb_, wb_, is_int=True, is_cv2=True, augment_on=False, whiten_on=False)[source]
batch_iterator(X, y=None, w=None, shuffle=False, augment_on=False)[source]

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn import models
>>> model = models.DummyModel(batch_size=16)
>>> X, y = model.make_random_testdata(num=37, cv2_format=True)
>>> model.ensure_data_params(X, y)
>>> result_list = [(Xb, Yb) for Xb, Yb in model.batch_iterator(X, y)]
>>> Xb, yb = result_list[0]
>>> assert np.all(X[0, :, :, 0] == Xb[0, 0, :, :])
>>> result = ut.depth_profile(result_list, compress_consecutive=True)
>>> print(result)
(7, [(16, 1, 4, 4), 16])

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn import models
>>> model = models.DummyModel(batch_size=16)
>>> X, y = model.make_random_testdata(num=37, cv2_format=False, asint=True)
>>> model.X_is_cv2_native = False
>>> model.ensure_data_params(X, y)
>>> result_list = [(Xb, Yb) for Xb, Yb in model.batch_iterator(X, y)]
>>> Xb, yb = result_list[0]
>>> assert np.all(np.isclose(X[0] / 255, Xb[0]))
>>> result = ut.depth_profile(result_list, compress_consecutive=True)
>>> print(result)
prepare_data(X, y=None, w=None)[source]

convenience function for external use

process_batch(theano_fn, X, y=None, w=None, buffered=False, unwrap=False, shuffle=False, augment_on=False)[source]

Execute a theano function on batches of X and y

rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models._ModelFitter[source]

Bases: object

CommandLine:

python -m wbia_cnn _ModelFitter.fit:0

_default_input_weights(X, y, w=None)[source]
_dump_best_monitor()[source]
_dump_case_monitor(X_learn, y_learn, X_valid, y_valid)[source]
_dump_epoch_monitor()[source]
_dump_weight_monitor()[source]
_ensure_learnval_split(X_train, y_train, X_valid=None, y_valid=None, valid_idx=None)[source]
_epoch_clean(theano_forward, X_general, y_general, w_general, conf_thresh=0.95)[source]

Forwards propagate -- Run the set through the forwards pass and clean.
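
One plausible reading of the cleaning rule, sketched in numpy with a hypothetical helper name: flag examples whose predicted class confidently disagrees with their label.

import numpy as np

def clean_indices(probs, y, conf_thresh=0.95):
    pred = probs.argmax(axis=1)   # predicted class per example
    conf = probs.max(axis=1)      # confidence of that prediction
    suspect = (pred != y) & (conf >= conf_thresh)
    return np.where(suspect)[0]

probs = np.array([[0.98, 0.02],   # confidently class 0
                  [0.60, 0.40],   # uncertain
                  [0.01, 0.99]])  # confidently class 1
y = np.array([1, 1, 1])           # all labeled class 1
assert clean_indices(probs, y).tolist() == [0]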

_epoch_learn(theano_backprop, X_learn, y_learn, w_learn, epoch)[source]

Backwards propagate -- Run the learning set through the backwards pass.

Ignore:
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> import theano
>>> model, dataset = mnist.testdata_mnist(dropout=.5)
>>> model.monitor_config['monitor'] = False
>>> model.monitor_config['showprog'] = True
>>> model._behavior['buffered'] = False
>>> model.init_arch()
>>> model.learn_state.init()
>>> batch_size = 16
>>> X_learn, y_learn = dataset.subset('test')
>>> model.ensure_data_params(X_learn, y_learn)
>>> class_to_weight = model.data_params['class_to_weight']
>>> class_to_weight.take(y_learn)
>>> w_learn = class_to_weight.take(y_learn).astype(np.float32)
>>> model._new_fit_session()
>>> theano_backprop = model.build_backprop_func()
_epoch_validate(theano_forward, X_valid, y_valid, w_valid)[source]

Forwards propagate -- Run the validation set through the forwards pass.

_epoch_validate_learn(theano_forward, X_learn, y_learn, w_learn)[source]

Forwards propagate -- Run the learning set through the forwards pass.

_init_fit_vars(kwargs)[source]
_new_fit_session()[source]

Starts a model training session

_overwrite_latest_image(fpath, new_name)[source]

copies the new image to a path to be overwritten so new updates are shown

_rename_old_sessions()[source]
dump_cases(X, y, subset_id='unknown', dpath=None)[source]
For each class find (one possible bucketing is sketched below):
  • the most-hard failures

  • the mid-level failures

  • the critical cases (least-hard failures / most-hard successes)

  • the mid-level successes

  • the least-hard successes
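
A hedged sketch of one way such buckets can be computed, ranking examples by the probability assigned to their true class (names and bucket sizes are illustrative):

import numpy as np

def bucket_cases(probs, y, k=3):
    true_conf = probs[np.arange(len(y)), y]   # confidence in the true class
    correct = probs.argmax(axis=1) == y
    order = np.argsort(true_conf)             # hardest (lowest conf) first
    failures = [i for i in order if not correct[i]]
    successes = [i for i in order if correct[i]]
    return {
        'most_hard_failures': failures[:k],
        'mid_level_failures': failures[len(failures) // 2:len(failures) // 2 + k],
        'critical': failures[-k:] + successes[:k],   # near the decision boundary
        'mid_level_successes': successes[len(successes) // 2:len(successes) // 2 + k],
        'least_hard_successes': successes[-k:],
    }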

ensure_data_params(X_learn, y_learn)[source]
fit(X_train, y_train, X_valid=None, y_valid=None, valid_idx=None, X_test=None, y_test=None, verbose=True, **kwargs)[source]

Trains the network with backprop.

CommandLine:

python -m wbia_cnn _ModelFitter.fit --name=bnorm --vd --monitor
python -m wbia_cnn _ModelFitter.fit --name=dropout
python -m wbia_cnn _ModelFitter.fit --name=incep

Example1:
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist(defaultname='bnorm', dropout=.5)
>>> model.init_arch()
>>> model.print_layer_info()
>>> model.print_model_info_str()
>>> X_train, y_train = dataset.subset('train')
>>> model.fit(X_train, y_train)
get_report_json()[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models._ModelIDs[source]

Bases: object

_init_id_vars(kwargs)[source]
property arch_id
CommandLine:

python -m wbia_cnn.models.abstract_models _ModelIDs.arch_id:0

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(24, 24, 1),
>>>                    output_dims=10, name='bnorm')
>>> model.init_arch()
>>> result = str(model.arch_id)
>>> print(result)
get_arch_hashid()[source]

Returns a hash identifying the architecture of the deterministic net. This does not involve any dropout or noise layers, nor does the initialization of the weights matter.

get_arch_nice()[source]

Makes a string that shows the number of input units, output units, hidden units, parameters, and model depth.

CommandLine:

python -m wbia_cnn.models.abstract_models get_arch_nice --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(24, 24, 1),
>>>                    output_dims=10)
>>> model.init_arch()
>>> result = str(model.get_arch_nice())
>>> print(result)
o10_d4_c107
property hash_id
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models._ModelIO[source]

Bases: object

_get_model_dpath(dpath, checkpoint_tag)[source]
_get_model_file_fpath(default_fname, fpath, dpath, fname, checkpoint_tag)[source]
_init_io_vars(kwargs)[source]
property arch_dpath
property checkpoint_dpath
checkpoint_save_model_info()[source]
checkpoint_save_model_state()[source]
get_model_info_fpath(fpath=None, dpath=None, fname=None, checkpoint_tag=None)[source]
get_model_state_fpath(fpath=None, dpath=None, fname=None, checkpoint_tag=None)[source]
has_saved_state(checkpoint_tag=None)[source]

Check if there are any saved model states matching the checkpoint tag.

list_saved_checkpoints()[source]
load_extern_weights(**kwargs)[source]

load weights from another model

load_model_state(**kwargs)[source]

TODO: resolve load_model_state and load_extern_weights into a single function that is less magic in what it does and more straightforward.

Example

>>> # Assumes mnist is trained
>>> from wbia_cnn.models.abstract_models import  *  # NOQA
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist()
>>> model.init_arch()
>>> model.load_model_state()
print_structure()[source]
CommandLine:

python -m wbia_cnn.models.abstract_models print_structure --show

Example

>>> from wbia_cnn.ingest_data import *  # NOQA
>>> dataset = grab_mnist_category_dataset()
>>> dataset.print_dir_structure()
>>> # ----
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(24, 24, 1),
>>>                    output_dims=10, dataset_dpath=dataset.dataset_dpath)
>>> model.print_structure()
resolve_fuzzy_checkpoint_pattern(checkpoint_pattern, extern_dpath=None)[source]

Tries to find a matching checkpoint so you don't have to type a full hash.
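
A minimal sketch of such fuzzy resolution using stdlib globbing, assuming the checkpoints/{history_id} layout shown at the top of this module (the helper name is illustrative):

import glob
from os.path import basename, join

def resolve_fuzzy_checkpoint(checkpoint_dpath, pattern):
    matches = glob.glob(join(checkpoint_dpath, '*%s*' % (pattern,)))
    if len(matches) != 1:
        raise ValueError('pattern %r matched %d checkpoints' % (pattern, len(matches)))
    return basename(matches[0])

# e.g. resolve_fuzzy_checkpoint('models/arch_abc/checkpoints', 'qewr')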

rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

save_model_info(**kwargs)[source]

save model information (history and results but no weights)

save_model_state(**kwargs)[source]

saves current model state

property saved_session_dpath
property trained_arch_dpath
property trained_model_dpath
class wbia_cnn.models.abstract_models._ModelPredicter[source]

Bases: object

_predict(X_test)[source]

Returns all prediction outputs of the network in a dictionary.

predict(X_test)[source]
predict_proba(X_test)[source]
predict_proba_Xb(Xb)[source]

Accepts prepared inputs

class wbia_cnn.models.abstract_models._ModelStrings[source]

Bases: object

get_arch_str(sep='_', with_noise=False)[source]

with_noise is a boolean specifying whether to include layers that do not affect the flow of information in the deterministic setting, i.e. whether to keep or drop dropout layers.

CommandLine:

python -m wbia_cnn.models.abstract_models _ModelStrings.get_arch_str:0

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(28, 28, 1),
>>>                    output_dims=10, batch_norm=True)
>>> model.init_arch()
>>> result = model.get_arch_str(sep=ut.NEWLINE, with_noise=False)
>>> print(result)
InputLayer(name=I0,shape=(128, 1, 24, 24))
Conv2DDNNLayer(name=C1,num_filters=32,stride=(1, 1),nonlinearity=rectify)
MaxPool2DDNNLayer(name=P1,stride=(2, 2))
Conv2DDNNLayer(name=C2,num_filters=32,stride=(1, 1),nonlinearity=rectify)
MaxPool2DDNNLayer(name=P2,stride=(2, 2))
DenseLayer(name=F3,num_units=256,nonlinearity=rectify)
DenseLayer(name=O4,num_units=10,nonlinearity=softmax)
get_layer_info_str()[source]
CommandLine:

python -m wbia_cnn.models.abstract_models _ModelStrings.get_layer_info_str:0

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(24, 24, 1),
>>>                    output_dims=10)
>>> model.init_arch()
>>> result = model.get_layer_info_str()
>>> print(result)
Network Structure:
 index  Name  Layer               Outputs      Bytes OutShape           Params
 0      I0    InputLayer              576    294,912 (128, 1, 24, 24)   []
 1      C1    Conv2DDNNLayer       12,800  6,556,928 (128, 32, 20, 20)  [C1.W(32,1,5,5, {t,r}), C1.b(32, {t})]
 2      P1    MaxPool2DDNNLayer     3,200  1,638,400 (128, 32, 10, 10)  []
 3      C2    Conv2DDNNLayer        1,152    692,352 (128, 32, 6, 6)    [C2.W(32,32,5,5, {t,r}), C2.b(32, {t})]
 4      P2    MaxPool2DDNNLayer       288    147,456 (128, 32, 3, 3)    []
 5      D2    DropoutLayer            288    147,456 (128, 32, 3, 3)    []
 6      F3    DenseLayer              256    427,008 (128, 256)         [F3.W(288,256, {t,r}), F3.b(256, {t})]
 7      D3    DropoutLayer            256    131,072 (128, 256)         []
 8      O4    DenseLayer               10     15,400 (128, 10)          [O4.W(256,10, {t,r}), O4.b(10, {t})]
...this model has 103,018 learnable parameters
...this model will use 10,050,984 bytes = 9.59 MB
get_state_str(other_override_reprs={})[source]
make_arch_json(with_noise=False)[source]
CommandLine:

python -m wbia_cnn.models.abstract_models make_arch_json --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist(defaultname='resnet')
>>> #model = mnist.MNISTModel(batch_size=128, data_shape=(28, 28, 1),
>>> #                         output_dims=10, batch_norm=True)
>>> model.init_arch()
>>> json_str = model.make_arch_json(with_noise=True)
>>> print(json_str)
print_arch_str(sep='\n ')[source]
print_layer_info()[source]
print_model_info_str()[source]
print_state_str(**kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.abstract_models._ModelUtility[source]

Bases: object

_validate_data(X_train)[source]

Check to make sure data agrees with model input

_validate_input(X, y=None, w=None)[source]
_validate_labels(X, y, w)[source]
get_all_layer_info()[source]
get_all_layers(with_noise=True, with_weightless=True)[source]
get_all_param_values()[source]
get_all_params(**tags)[source]
get_output_layer()[source]
property layers_

for compatibility with nolearn visualizations

make_random_testdata(num=37, rng=0, cv2_format=False, asint=False)[source]
set_all_param_values(weights_list)[source]
class wbia_cnn.models.abstract_models._ModelVisualization[source]

Bases: object

CommandLine:

python -m wbia_cnn.models.abstract_models _ModelVisualization
python -m wbia_cnn.models.abstract_models _ModelVisualization --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models import dummy
>>> model = dummy.DummyModel(batch_size=16, autoinit=False)
>>> #model._theano_mode = theano.compile.Mode(linker='py', optimizer='fast_compile')
>>> model._theano_mode = theano.compile.FAST_COMPILE
>>> model.init_arch()
>>> X, y = model.make_random_testdata(num=27, cv2_format=True, asint=False)
>>> model.fit(X, y, max_epochs=10, era_size=3, buffered=False)
>>> fnum = None
>>> import plottool as pt
>>> pt.qt4ensure()
>>> fnum = 1
>>> model.show_loss_history(fnum)
>>> #model.show_era_report(fnum)
>>> ut.show_if_requested()
_show_era_acc(**kwargs)[source]
_show_era_class_pr(types=['valid', 'learn'], measures=['precision', 'recall'], **kwargs)[source]
_show_era_loss(**kwargs)[source]
_show_era_lossratio(**kwargs)[source]
_show_era_measure(ydatas, labels=None, styles=None, xdatas=None, xlabel='epoch', ylabel='', yspreads=None, colors=None, fnum=None, pnum=(1, 1, 1), yscale='log')[source]
dump_class_dream(fpath)[source]

initial =

imwrite_arch(fpath=None, fullinfo=True)[source]
Parameters

fpath (str) – file path string (default = None)

CommandLine:

python -m wbia_cnn.models.abstract_models imwrite_arch --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> from wbia_cnn.models.mnist import MNISTModel
>>> model = MNISTModel(batch_size=128, data_shape=(24, 24, 1),
>>>                    output_dims=10, batch_norm=False, name='mnist')
>>> model.init_arch()
>>> fpath = model.imwrite_arch()
>>> ut.quit_if_noshow()
>>> ut.startfile(fpath)
render_arch(fullinfo=True)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

show_arch(fnum=None, fullinfo=True, **kwargs)[source]
show_class_dream(fnum=None, **kwargs)[source]
CommandLine:

python -m wbia_cnn.models.abstract_models show_class_dream --show

Example

>>> # DISABLE_DOCTEST
>>> # Assumes mnist is trained
>>> from wbia_cnn.draw_net import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist(dropout=.5)
>>> model.init_arch()
>>> model.load_model_state()
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> #pt.qt4ensure()
>>> model.show_class_dream()
>>> ut.show_if_requested()
show_loss_history(fnum=None)[source]
Parameters

fnum (int) – figure number (default = None)

Returns

fig

Return type

mpl.Figure

CommandLine:

python -m wbia_cnn _ModelVisualization.show_loss_history --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> model = testdata_model_with_history()
>>> fnum = None
>>> model.show_loss_history(fnum)
>>> ut.show_if_requested()
show_pr_history(fnum=None)[source]
CommandLine:

python -m wbia_cnn --tf _ModelVisualization.show_pr_history --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> model = testdata_model_with_history()
>>> fnum = None
>>> model.show_pr_history(fnum)
>>> ut.show_if_requested()
show_regularization_stuff(fnum=None, pnum=(1, 1, 1))[source]
show_update_mag_history(fnum=None)[source]
CommandLine:

python -m wbia_cnn --tf _ModelVisualization.show_update_mag_history --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.abstract_models import *  # NOQA
>>> model = testdata_model_with_history()
>>> fnum = None
>>> model.show_update_mag_history(fnum)
>>> ut.show_if_requested()
show_weight_updates(param_keys=None, **kwargs)[source]
show_weights_image(index=0, *args, **kwargs)[source]
wbia_cnn.models.abstract_models.delayed_import()[source]
wbia_cnn.models.abstract_models.report_error(msg)[source]
wbia_cnn.models.abstract_models.testdata_model_with_history()[source]

wbia_cnn.models.aoi module

class wbia_cnn.models.aoi.AoIModel(autoinit=False, batch_size=128, data_shape=(64, 64, 3), name='aoi', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractVectorVectorModel

augment(Xb, yb=None, wb=None, parallel=False)[source]
get_aoi_def(verbose=False, **kwargs)[source]
init_arch(verbose=True, **kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.aoi.augment_parallel(X, y, w)[source]
wbia_cnn.models.aoi.augment_wrapper(Xb, yb=None, wb=None)[source]
wbia_cnn.models.aoi.train_aoi(output_path, data_fpath, labels_fpath)[source]
CommandLine:

python -m wbia_cnn.train --test-train_aoi

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_aoi()
>>> print(result)

wbia_cnn.models.aoi2 module

class wbia_cnn.models.aoi2.AoI2Model(autoinit=False, batch_size=128, data_shape=(64, 64, 3), name='aoi2', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

augment(Xb, yb=None, wb=None, parallel=True)[source]
get_aoi2_def(verbose=False, **kwargs)[source]
init_arch(verbose=False, **kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.aoi2.augment_parallel(X, y, w)[source]
wbia_cnn.models.aoi2.augment_wrapper(Xb, yb=None, wb=None)[source]
wbia_cnn.models.aoi2.train_aoi2(output_path, data_fpath, labels_fpath, purge=True)[source]
CommandLine:

python -m wbia_cnn.train --test-train_aoi2

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_aoi2()
>>> print(result)

wbia_cnn.models.background module

class wbia_cnn.models.background.BackgroundModel(autoinit=False, batch_size=128, data_shape=(48, 48, 3), num_output=2, **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

augment(Xb, yb=None)[source]
get_background_def(verbose=False, **kwargs)[source]
init_arch(verbose=False, **kwargs)[source]
learning_rate_shock(x)[source]
learning_rate_update(x)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.background.NonlinearityLayerSpatial(incoming, nonlinearity=<function rectify>, **kwargs)[source]

Bases: lasagne.layers.special.NonlinearityLayer

get_output_for(input, **kwargs)[source]

Propagates the given input through this layer (and only this layer).

Parameters

input (Theano expression) – The expression to propagate through this layer.

Returns

output – The output of this layer given the input to this layer.

Return type

Theano expression

Notes

This is called by the base lasagne.layers.get_output() to propagate data through a network.

This method should be overridden when implementing a new Layer class. By default it raises NotImplementedError.

get_output_shape_for(input_shape)[source]

Computes the output shape of this layer, given an input shape.

Parameters

input_shape (tuple) – A tuple representing the shape of the input. The tuple should have as many elements as there are input dimensions, and the elements should be integers or None.

Returns

A tuple representing the shape of the output of this layer. The tuple has as many elements as there are output dimensions, and the elements are all either integers or None.

Return type

tuple

Notes

This method will typically be overridden when implementing a new Layer class. By default it simply returns the input shape. This means that a layer that does not modify the shape (e.g. because it applies an elementwise operation) does not need to override this method.
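
The two methods above are the standard lasagne Layer extension points. A toy illustration of the override pattern (not the actual NonlinearityLayerSpatial implementation):

import lasagne

class ConstantScaleLayer(lasagne.layers.Layer):
    def __init__(self, incoming, scale=2.0, **kwargs):
        super(ConstantScaleLayer, self).__init__(incoming, **kwargs)
        self.scale = scale

    def get_output_for(self, input, **kwargs):
        # propagate through this layer only: elementwise scaling
        return input * self.scale

    def get_output_shape_for(self, input_shape):
        # elementwise op, so the shape is unchanged
        return input_shape

l_in = lasagne.layers.InputLayer(shape=(None, 3, 48, 48))
l_scale = ConstantScaleLayer(l_in, scale=0.5)
output_expr = lasagne.layers.get_output(l_scale)  # symbolic expression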

wbia_cnn.models.background.train_background(output_path, data_fpath, labels_fpath)[source]
CommandLine:

python -m wbia_cnn.train --test-train_background

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_background()
>>> print(result)

wbia_cnn.models.classifier module

class wbia_cnn.models.classifier.ClassifierModel(autoinit=False, batch_size=128, data_shape=(64, 64, 3), name='classifier', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

augment(Xb, yb=None, parallel=True)[source]
get_classifier_def(verbose=False, **kwargs)[source]
init_arch(verbose=False, **kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.classifier.augment_parallel(X, y)[source]
wbia_cnn.models.classifier.augment_wrapper(Xb, yb=None)[source]
wbia_cnn.models.classifier.train_classifier(output_path, data_fpath, labels_fpath)[source]
CommandLine:

python -m wbia_cnn.train --test-train_classifier

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_classifier()
>>> print(result)

wbia_cnn.models.classifier2 module

class wbia_cnn.models.classifier2.Classifier2Model(autoinit=False, batch_size=128, data_shape=(64, 64, 3), name='classifier2', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractVectorModel

augment(Xb, yb=None, parallel=True)[source]
get_classifier2_def(verbose=False, **kwargs)[source]
init_arch(verbose=False, **kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.classifier2.augment_parallel(X, y)[source]
wbia_cnn.models.classifier2.augment_wrapper(Xb, yb=None)[source]
wbia_cnn.models.classifier2.train_classifier2(output_path, data_fpath, labels_fpath, purge=True)[source]
CommandLine:

python -m wbia_cnn.train --test-train_classifier2

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_classifier2()
>>> print(result)

wbia_cnn.models.dummy module

class wbia_cnn.models.dummy.DummyModel(batch_size=8, data_shape=(4, 4, 1), **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

init_arch(verbose=True)[source]
CommandLine:

python -m wbia_cnn DummyModel.init_arch --verbcnn --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.dummy import *  # NOQA
>>> model = DummyModel(autoinit=True)
>>> model.print_model_info_str()
>>> print(model)
>>> ut.quit_if_noshow()
>>> model.show_arch()
>>> ut.show_if_requested()
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.labeler module

class wbia_cnn.models.labeler.LabelerModel(autoinit=False, batch_size=128, data_shape=(64, 64, 3), name='labeler', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

augment(Xb, yb=None, parallel=True)[source]
get_labeler_def(verbose=False, **kwargs)[source]
init_arch(verbose=False, **kwargs)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.labeler.augment_parallel(X, y)[source]
wbia_cnn.models.labeler.augment_wrapper(Xb, yb=None)[source]
wbia_cnn.models.labeler.train_labeler(output_path, data_fpath, labels_fpath)[source]
CommandLine:

python -m wbia_cnn.train --test-train_labeler

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.train import *  # NOQA
>>> result = train_labeler()
>>> print(result)

wbia_cnn.models.mnist module

class wbia_cnn.models.mnist.MNISTModel(**kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

Toy model for testing and playing with mnist

CommandLine:

python -m wbia_cnn.models.mnist MNISTModel:0
python -m wbia_cnn.models.mnist MNISTModel:1

python -m wbia_cnn _ModelFitter.fit:0 --vd --monitor
python -m wbia_cnn _ModelFitter.fit:1 --vd

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.mnist import *  # NOQA
>>> from wbia_cnn import ingest_data
>>> dataset = ingest_data.grab_mnist_category_dataset_float()
>>> model = MNISTModel(batch_size=128, data_shape=dataset.data_shape,
>>>                    output_dims=dataset.output_dims,
>>>                    training_dpath=dataset.training_dpath)
>>> output_layer = model.init_arch()
>>> model.print_model_info_str()
>>> model.mode = 'FAST_COMPILE'
>>> model.build_backprop_func()

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.mnist import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist()
>>> model.init_arch()
>>> model.print_layer_info()
>>> model.print_model_info_str()
>>> #model.reinit_weights()
>>> X_train, y_train = dataset.subset('train')
>>> model.fit(X_train, y_train)
>>> output_layer = model.init_arch()
>>> model.print_layer_info()
>>> # parse training arguments
>>> model.monitor_config.update(**ut.argparse_dict(dict(
>>>     era_size=100,
>>>     max_epochs=5,
>>>     rate_schedule=.8,
>>> )))
>>> X_train, y_train = dataset.subset('train')
>>> model.fit(X_train, y_train)
augment(Xb, yb=None)[source]
CommandLine:

python -m wbia_cnn.models.mnist MNISTModel.augment --show

Example

>>> from wbia_cnn.models.mnist import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> import numpy as np
>>> model, dataset = mnist.testdata_mnist()
>>> model._rng = ut.ensure_rng(model.hyperparams['random_seed'])
>>> X_valid, y_valid = dataset.subset('test')
>>> num = 10
>>> Xb = X_valid[:num]
>>> yb = None
>>> Xb = Xb / 255.0 if ut.is_int(Xb) else Xb
>>> Xb = Xb.astype(np.float32, copy=True)
>>> yb = None if yb is None else yb.astype(np.int32, copy=True)
>>> # Rescale the batch data to the range 0 to 1
>>> Xb_, yb_ = model.augment(Xb.copy())
>>> yb_ = None
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> pt.qt4ensure()
>>> from wbia_cnn import augment
>>> augment.show_augmented_patches(Xb, Xb_, yb, yb_, data_per_label=1)
>>> ut.show_if_requested()
fit(*args, **kwargs)[source]
CommandLine:

python -m wbia_cnn.models.mnist MNISTModel.fit --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.mnist import *  # NOQA
>>> from wbia_cnn.models import mnist
>>> model, dataset = mnist.testdata_mnist()
>>> model.init_arch()
>>> # parse training arguments
>>> model.monitor_config.update(**ut.argparse_dict(dict(
>>>     era_size=20,
>>>     max_epochs=100,
>>>     rate_schedule=.9,
>>> )))
>>> X_train, y_train = dataset.subset('train')
>>> model.fit(X_train, y_train)
get_inception_def()[source]

python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=resnet --show
python -m wbia_cnn.models.mnist MNISTModel.fit:0 --name=resnet --vd --monitor

get_lenet_def()[source]

python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=lenet --show
python -m wbia_cnn.models.mnist MNISTModel.fit:0 --name=lenet --vd --monitor

get_mnist_def()[source]

Follows https://github.com/Lasagne/Lasagne/blob/master/examples/mnist.py

python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=mnist --show
python -m wbia_cnn.models.mnist MNISTModel.fit:0 --name=mnist --vd --monitor

get_resnet_def()[source]

A residual network with pre-activations

python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=resnet --show
python -m wbia_cnn.models.mnist MNISTModel.fit:0 --name=resnet --vd --monitor

init_arch()[source]
CommandLine:

python -m wbia_cnn MNISTModel.init_arch --verbcnn
python -m wbia_cnn MNISTModel.init_arch --verbcnn --show

python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=bnorm --show
python -m wbia_cnn MNISTModel.init_arch --verbcnn --name=incep --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.mnist import *  # NOQA
>>> verbose = True
>>> name = ut.get_argval('--name', default='bnorm')
>>> model = MNISTModel(batch_size=128, data_shape=(28, 28, 1),
>>>                    output_dims=10, name=name)
>>> model.init_arch()
>>> model.print_model_info_str()
>>> print(model)
>>> ut.quit_if_noshow()
>>> model.show_arch()
>>> ut.show_if_requested()
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.mnist.testdata_mnist(defaultname='lenet', batch_size=128, dropout=None)[source]

wbia_cnn.models.pretrained module

class wbia_cnn.models.pretrained.PretrainedNetwork(model_key=None, show_network=False)[source]

Bases: object

TODO: move to new class

Initialize weights from a specified (Caffe) pretrained network's layers

Parameters

layer (int) – int

CommandLine:

python -m wbia_cnn --tf PretrainedNetwork:0
python -m wbia_cnn --tf PretrainedNetwork:1

Example0:
>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> self = PretrainedNetwork('caffenet', show_network=True)
>>> print('done')
Example1:
>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> self = PretrainedNetwork('vggnet', show_network=True)
>>> print('done')
get_conv2d_layer(layer_index, name=None, **kwargs)[source]

Assumes requested layer is convolutional

Returns

Layer

Return type

lasagne.layers.Layer

get_layer_filter_size(layer_index)[source]
get_layer_num_filters(layer_index)[source]
get_num_layers()[source]
get_pretrained_layer(layer_index, rand=False)[source]
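
A hedged usage sketch built only from the methods documented above; it assumes the 'caffenet' weights are available locally (see Example0):

from wbia_cnn.models.pretrained import PretrainedNetwork

pretrained = PretrainedNetwork('caffenet')
print(pretrained.get_num_layers())
print(pretrained.get_layer_num_filters(0))
print(pretrained.get_layer_filter_size(0))
# build a lasagne conv layer seeded with the pretrained weights of layer 0
# (get_conv2d_layer assumes the requested layer is convolutional)
conv0 = pretrained.get_conv2d_layer(0, name='C0')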

wbia_cnn.models.quality module

class wbia_cnn.models.quality.QualityModel[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

build_model(batch_size, input_width, input_height, input_channels, output_dims)[source]
label_order_mapping(category_list)[source]
learning_rate_shock(x)[source]
learning_rate_update(x)[source]
rrr(verbose=True, reload_module=True)

special class reloading function This function is often injected as rrr of classes

wbia_cnn.models.siam module

Siamese based models

References

http://yann.lecun.com/exdb/publis/pdf/hadsell-chopra-lecun-06.pdf
https://github.com/BVLC/caffe/pull/959
http://yann.lecun.com/exdb/publis/pdf/chopra-05.pdf
http://www.commendo.at/references/files/paperCVWW08.pdf
https://tspace.library.utoronto.ca/bitstream/1807/43097/3/Liu_Chen_201311_MASc_thesis.pdf
http://arxiv.org/pdf/1412.6622.pdf
http://papers.nips.cc/paper/4314-extracting-speaker-specific-information-with-a-regularized-siamese-deep-network.pdf
http://machinelearning.wustl.edu/mlpapers/paper_files/NIPS2005_265.pdf
http://vision.ia.ac.cn/zh/senimar/reports/Siamese-Network-Architecture-and-Applications-in-Computer-Vision.pdf

https://groups.google.com/forum/#!topic/caffe-users/D-7sRDw9v8c
http://caffe.berkeleyvision.org/gathered/examples/siamese.html
https://groups.google.com/forum/#!topic/lasagne-users/N9zDNvNkyWY
http://www.cs.nyu.edu/~sumit/research/research.html
https://github.com/Lasagne/Lasagne/issues/168
https://groups.google.com/forum/#!topic/lasagne-users/7JX_8zKfDI0

class wbia_cnn.models.siam.AbstractSiameseModel(*args, **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.BaseModel

augment(Xb, yb=None)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.siam.SiameseCenterSurroundModel(autoinit=False, batch_size=128, input_shape=None, data_shape=(64, 64, 3), **kwargs)[source]

Bases: wbia_cnn.models.siam.AbstractSiameseModel

Model for individual identification

get_2ch2stream_def(verbose=True, **kwargs)[source]

Notes

  1. 2ch-2stream consists of two branches, C(95, 5, 1)-ReLU-P(2, 2)-C(96, 3, 1)-ReLU-P(2, 2)-C(192, 3, 1)-ReLU-C(192, 3, 1)-ReLU, one for the central and one for the surround parts, followed by F(768)-ReLU-F(1)
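
A hedged lasagne sketch of one such branch, reading the notation as C(num_filters, filter_size, stride) and P(pool_size, stride); this illustrates the notation rather than reproducing the model's code:

from lasagne.layers import Conv2DLayer, DenseLayer, InputLayer, MaxPool2DLayer
from lasagne.nonlinearities import rectify

l = InputLayer(shape=(None, 2, 64, 64))  # a 2-channel (stacked) patch pair
l = Conv2DLayer(l, 95, (5, 5), stride=(1, 1), nonlinearity=rectify)   # C(95, 5, 1)-ReLU
l = MaxPool2DLayer(l, pool_size=(2, 2))                               # P(2, 2)
l = Conv2DLayer(l, 96, (3, 3), stride=(1, 1), nonlinearity=rectify)   # C(96, 3, 1)-ReLU
l = MaxPool2DLayer(l, pool_size=(2, 2))                               # P(2, 2)
l = Conv2DLayer(l, 192, (3, 3), stride=(1, 1), nonlinearity=rectify)  # C(192, 3, 1)-ReLU
l = Conv2DLayer(l, 192, (3, 3), stride=(1, 1), nonlinearity=rectify)  # C(192, 3, 1)-ReLU
# after the central and surround branches are concatenated:
l = DenseLayer(l, num_units=768, nonlinearity=rectify)                # F(768)-ReLU
l = DenseLayer(l, num_units=1, nonlinearity=None)                     # F(1)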

get_siam2stream_def(verbose=True, **kwargs)[source]

Notes

  1. siam-2stream has 4 branches, C(96, 4, 2)-ReLU-P(2, 2)-C(192, 3, 1)-ReLU-C(256, 3, 1)-ReLU-C(256, 3, 1)-ReLU (coupled in pairs for the central and surround streams), followed by a decision layer F(512)-ReLU-F(1)

get_siam2stream_l2_def(verbose=True, **kwargs)[source]

Notes

  1. siam-2stream-l2 consists of one central and one surround branch of siam-2stream.

init_arch(verbose=True, **kwargs)[source]

Notes

http://arxiv.org/pdf/1504.03641.pdf

CommandLine:

python -m wbia_cnn.models.siam --test-SiameseCenterSurroundModel.init_arch
python -m wbia_cnn.models.siam --test-SiameseCenterSurroundModel.init_arch --verbcnn
python -m wbia_cnn.models.siam --test-SiameseCenterSurroundModel.init_arch --verbcnn --show
python -m wbia_cnn.train --test-pz_patchmatch --vtd --max-examples=5 --batch_size=128 --learning_rate .0000001 --verbcnn
python -m wbia_cnn.train --test-pz_patchmatch --vtd

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> # build test data
>>> batch_size = 128
>>> input_shape = (batch_size, 3, 64, 64)
>>> verbose = True
>>> model = SiameseCenterSurroundModel(batch_size=batch_size, input_shape=input_shape)
>>> # execute function
>>> output_layer = model.init_arch()
>>> model.print_model_info_str()
>>> result = str(output_layer)
>>> print(result)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> model.show_arch()
>>> ut.show_if_requested()
learn_encoder(labels, scores, **kwargs)[source]
loss_function(network_output, Y, T=theano.tensor, verbose=True)[source]
CommandLine:

python -m wbia_cnn.models.siam --test-loss_function
python -m wbia_cnn.models.siam --test-loss_function:1 --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> from wbia_cnn import ingest_data
>>> from wbia_cnn import batch_processing as batch
>>> data, labels = ingest_data.testdata_patchmatch()
>>> model = SiameseCenterSurroundModel(autoinit=True, input_shape=(128,) + (data.shape[1:]))
>>> theano_forward = batch.create_unbuffered_network_output_func(model)
>>> batch_size = model.batch_size
>>> Xb, yb = data[0:batch_size * model.data_per_label_input], labels[0:batch_size]
>>> network_output = theano_forward(Xb)[0]
>>> Y = yb
>>> T = np
>>> # execute function
>>> verbose = True
>>> avg_loss = model.loss_function(network_output, Y, T=T)
>>> result = str(avg_loss)
>>> print(result)
Example1:
>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> network_output = np.linspace(-2, 2, 128)
>>> Y0 = np.zeros(len(network_output), np.float32)
>>> Y1 = np.ones(len(network_output), np.float32)
>>> verbose = False
>>> T = np
>>> Y = Y0
>>> func = SiameseCenterSurroundModel.loss_function
>>> loss0, Y0_ = ut.exec_func_src(func, globals(), locals(), ['loss', 'Y_'])
>>> Y = Y1
>>> loss1, Y1_ = ut.exec_func_src(func, globals(), locals(), ['loss', 'Y_'])
>>> assert np.all(Y1 == 1) and np.all(Y1_ == 1), 'bad label mapping'
>>> assert np.all(Y0 == 0) and np.all(Y0_ == -1), 'bad label mapping'
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> pt.plot2(network_output, loss0, '-', color=pt.TRUE_BLUE, label='imposter_loss', y_label='network output')
>>> pt.plot2(network_output, loss1, '-', color=pt.FALSE_RED, label='genuine_loss', y_label='network output')
>>> pt.legend()
>>> ut.show_if_requested()
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

class wbia_cnn.models.siam.SiameseL2(autoinit=False, batch_size=128, data_shape=(64, 64, 3), arch_tag='siaml2', **kwargs)[source]

Bases: wbia_cnn.models.siam.AbstractSiameseModel

Model for individual identification

get_mnist_siaml2_def(verbose=True, **kwargs)[source]

python -m wbia_cnn --tf SiameseL2.init_arch --archtag mnist_siaml2 --datashape=28,28,1 --verbose --show

get_siam2streaml2_def(verbose=True, **kwargs)[source]

Notes

  1. siam-2stream-l2 consists of one central and one surround branch of siam-2stream: C0(96, 7, 3) - ReLU - P0(2, 2) - C1(192, 5, 1) - ReLU - P1(2, 2) - C2(256, 3, 1)

CommandLine:

python -m wbia_cnn --tf SiameseL2.init_arch --archtag siam2streaml2 --datashape=64,64,1 --verbose --show

get_siam_deepfaceish_def(verbose=True, **kwargs)[source]
CommandLine:

python -m wbia_cnn --tf SiameseL2.init_arch --archtag siam_deepfaceish --datashape=128,256,1 --verbose --show
python -m wbia_cnn --tf SiameseL2.init_arch --archtag siam_deepface --datashape=152,152,3 --verbose --show

get_siaml2_128_def(verbose=True, **kwargs)[source]

Notes

  1. siam-2stream-l2 consists of one central and one surround branch of siam-2stream: C0(96, 7, 3) - ReLU - P0(2, 2) - C1(192, 5, 1) - ReLU - P1(2, 2) - C2(256, 3, 1)

get_siaml2_def(verbose=True, **kwargs)[source]

Notes

  1. siam-2stream-l2 consists of one central and one surround branch of siam-2stream: C0(96, 7, 3) - ReLU - P0(2, 2) - C1(192, 5, 1) - ReLU - P1(2, 2) - C2(256, 3, 1)

get_siaml2_partmatch_def(verbose=True, **kwargs)[source]
CommandLine:

python -m wbia_cnn --tf SiameseL2.init_arch --archtag siaml2_partmatch --datashape=128,256,1 --verbose --show

init_arch(verbose=False, **kwargs)[source]

Notes

http://arxiv.org/pdf/1504.03641.pdf

CommandLine:

python -m wbia_cnn.models.siam --test-SiameseL2.init_arch --verbcnn --show
python -m wbia_cnn --tf SiameseL2.init_arch --verbcnn --show

Example

>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models.siam import *  # NOQA
>>> verbose = True
>>> arch_tag = ut.get_argval('--archtag', default='siaml2')
>>> data_shape = tuple(ut.get_argval('--datashape', type_=list, default=(64, 64, 3)))
>>> model = SiameseL2(batch_size=128, data_shape=data_shape, arch_tag=arch_tag)
>>> output_layer = model.init_arch()
>>> model.print_model_info_str()
>>> ut.quit_if_noshow()
>>> model.show_arch()
>>> ut.show_if_requested()
learn_encoder(labels, scores, **kwargs)[source]
loss_function(network_output, labels, T=theano.tensor, verbose=True)[source]

Implements the contrastive loss term from (Hadsell, Chopra, LeCun 06)

CommandLine:

python -m wbia_cnn.models.siam --test-SiameseL2.loss_function --show

Example1:
>>> # ENABLE_DOCTEST
>>> from wbia_cnn.models import *  # NOQA
>>> network_output, labels = testdata_siam_desc()
>>> verbose = False
>>> T = np
>>> func = SiameseL2.loss_function
>>> loss, dist_l2 = ut.exec_func_src(func, globals(), locals(), ['loss', 'dist_l2'])
>>> ut.quit_if_noshow()
>>> dist0_l2 = dist_l2[labels]
>>> dist1_l2 = dist_l2[~labels]
>>> loss0 = loss[labels]
>>> loss1 = loss[~labels]
>>> import plottool as pt
>>> pt.plot2(dist0_l2, loss0, 'x', color=pt.TRUE_BLUE, label='imposter_loss', y_label='loss')
>>> pt.plot2(dist1_l2, loss1, 'x', color=pt.FALSE_RED, label='genuine_loss', y_label='loss')
>>> pt.legend()
>>> ut.show_if_requested()
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

wbia_cnn.models.siam.constrastive_loss(dist_l2, labels, margin, T=theano.tensor)[source]
LaTeX:

$y E^2 + (1 - y)\,\max(m - E, 0)^2$

Parameters
  • dist_l2 (ndarray) – energy of a training example (l2 distance of descriptor pairs)

  • labels (ndarray) – 1 if genuine pair, 0 if imposter pair

  • margin (float) – positive number

  • T (module) – (default = theano.tensor)

Returns

loss

Return type

ndarray

Notes

Careful: you need to pass the euclidean distance in here, NOT the squared euclidean distance; otherwise you end up with T.maximum(0, (m ** 2 - 2 * m * d + d ** 2)), which still requires the square root operation.
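
A plain numpy sketch of the loss above (the theano version is the same with T in place of np), using the non-squared euclidean distance as the note requires:

import numpy as np

def contrastive_loss_np(dist_l2, labels, margin):
    # labels: 1 for genuine pairs, 0 for imposter pairs; dist_l2: energy E
    genuine_term = labels * dist_l2 ** 2
    imposter_term = (1 - labels) * np.maximum(margin - dist_l2, 0) ** 2
    return genuine_term + imposter_term

dist_l2 = np.array([0.1, 0.1, 2.0, 2.0])
labels = np.array([1.0, 0.0, 1.0, 0.0])
print(contrastive_loss_np(dist_l2, labels, margin=1.25))
# genuine pairs are penalized for distance, imposters for closeness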

CommandLine:

python -m wbia_cnn.models.siam --test-constrastive_loss --show

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.siam import *  # NOQA
>>> dist_l2 = np.linspace(0, 2.5, 200)
>>> labels = np.tile([True, False], 100)
>>> margin, T = 1.25, np
>>> loss = constrastive_loss(dist_l2, labels, margin, T)
>>> ut.quit_if_noshow()
>>> import plottool as pt
>>> xdat_genuine, ydat_genuine = dist_l2[labels], loss[labels] * 2.0
>>> xdat_imposter, ydat_imposter = dist_l2[~labels], loss[~labels] * 2.0
>>> #pt.presetup_axes(x_label='Energy (D_w)', y_label='Loss (L)', equal_aspect=False)
>>> pt.presetup_axes(x_label='Energy (E)', y_label='Loss (L)', equal_aspect=False)
>>> pt.plot(xdat_genuine, ydat_genuine, '--', lw=2, color=pt.TRUE, label='Correct training pairs')
>>> pt.plot(xdat_imposter, ydat_imposter, '-', lw=2, color=pt.FALSE,  label='Incorrect training pairs')
>>> pt.pad_axes(.03, ylim=(0, 3.5))
>>> pt.postsetup_axes()
>>> ut.show_if_requested()
wbia_cnn.models.siam.ignore_hardest_cases(loss, labels, num_ignore=3, T=theano.tensor)[source]
Parameters
  • loss (theano.Tensor) –

  • labels (theano.Tensor) –

  • num_ignore (int) – (default = 3)

  • T (module) – (default = theano.tensor)

Returns

loss

Return type

theano.Tensor

CommandLine:

python -m wbia_cnn.models.siam --test-ignore_hardest_cases:0
python -m wbia_cnn.models.siam --test-ignore_hardest_cases:1
python -m wbia_cnn.models.siam --test-ignore_hardest_cases:2

Example0:
>>> # ENABLE_DOCTEST
>>> # Test numpy version
>>> from wbia_cnn.models.siam import *  # NOQA
>>> loss_arr   = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int32)
>>> labels_arr = np.array([1, 0, 0, 1, 1, 1, 1, 1, 0], dtype=np.int32)
>>> loss   = loss_arr
>>> labels = labels_arr
>>> num_ignore = 2
>>> T = np
>>> ignored_loss_arr = ignore_hardest_cases(loss, labels, num_ignore, T)
>>> result = ('ignored_loss = %s' % (ut.numpy_str(ignored_loss_arr),))
>>> print(result)
ignored_loss = np.array([0, 1, 0, 3, 4, 5, 0, 0, 0], dtype=np.int32)
Example1:
>>> # ENABLE_DOCTEST
>>> # Test theano version
>>> from wbia_cnn.models.siam import *  # NOQA
>>> import theano.tensor
>>> loss_arr   = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int32)
>>> labels_arr = np.array([1, 0, 0, 1, 1, 1, 1, 1, 0], dtype=np.int32)
>>> T = theano.tensor
>>> loss = T.ivector(name='loss')
>>> labels = T.ivector(name='labels')
>>> num_ignore = 2
>>> ignored_loss = ignore_hardest_cases(loss, labels, num_ignore, T)
>>> ignored_loss_arr = ignored_loss.eval({loss: loss_arr, labels: labels_arr})
>>> result = ('ignored_loss = %s' % (ut.numpy_str(ignored_loss_arr),))
>>> print(result)
ignored_loss = np.array([0, 1, 0, 3, 4, 5, 0, 0, 0], dtype=np.int32)
Example2:
>>> # ENABLE_DOCTEST
>>> # Test version compatibility
>>> from wbia_cnn.models.siam import *  # NOQA
>>> import wbia_cnn.theano_ext as theano_ext
>>> import theano.tensor
>>> loss_arr   = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8], dtype=np.int32)
>>> labels_arr = np.array([1, 0, 0, 1, 1, 1, 1, 1, 0], dtype=np.int32)
>>> T = theano.tensor
>>> loss = T.ivector(name='loss')
>>> labels = T.ivector(name='labels')
>>> num_ignore = 2
>>> # build numpy targets
>>> numpy_locals = {'np': np, 'T': np, 'loss': loss_arr, 'labels': labels_arr, 'num_ignore': num_ignore}
>>> func = ignore_hardest_cases
>>> numpy_vars = ut.exec_func_src(func, {}, numpy_locals, None)
>>> numpy_targets = ut.delete_dict_keys(numpy_vars, ['__doc__', 'T', 'np', 'num_ignore'])
>>> # build theano functions
>>> theano_locals = {'np': np, 'T': theano.tensor, 'loss': loss, 'labels': labels, 'num_ignore': num_ignore}
>>> func = ignore_hardest_cases
>>> theano_vars = ut.exec_func_src(func, {}, theano_locals, None)
>>> theano_symbols = ut.delete_dict_keys(theano_vars, ['__doc__', 'T', 'np', 'num_ignore'])
>>> inputs_to_value = {loss: loss_arr, labels: labels_arr}
>>> # Evalute and test consistency
>>> key_order = sorted(list(theano_symbols.keys()))
>>> theano_values = {}
>>> noerror = True
>>> for key in key_order:
...     symbol = theano_symbols[key]
...     print('key=%r' % (key,))
...     theano_value = theano_ext.eval_symbol(symbol, inputs_to_value)
...     theano_values[key] = theano_value
...     prefix = '  * '
...     if not np.all(theano_values[key] == numpy_targets[key]):
...         prefix = ' !!! '
...         noerror = False
...     # Cast to compatible dtype
...     numpy_value = numpy_targets[key]
...     result_dtype = np.result_type(numpy_value, theano_value)
...     numpy_value = numpy_value.astype(result_dtype)
...     theano_value = theano_value.astype(result_dtype)
...     numpy_targets[key] = numpy_value
...     theano_values[key] = theano_value
...     print(prefix + 'numpy_value  = %r' % (numpy_value,))
...     print(prefix + 'theano_value = %r' % (theano_value,))
>>> print('numpy_targets = ' + ut.dict_str(numpy_targets, align=True))
>>> print('theano_values = ' + ut.dict_str(theano_values, align=True))
>>> assert noerror, 'There was an error'
wbia_cnn.models.siam.predict()[source]
wbia_cnn.models.siam.testdata_siam_desc(num_data=128, desc_dim=8)[source]

wbia_cnn.models.viewpoint module

class wbia_cnn.models.viewpoint.ViewpointModel(autoinit=False, batch_size=128, data_shape=(96, 96, 3), arch_tag='viewpoint', **kwargs)[source]

Bases: wbia_cnn.models.abstract_models.AbstractCategoricalModel

augment(Xb, yb=None)[source]
init_arch()[source]
label_order_mapping(category_list)[source]
Parameters

category_list (list) –

Returns

category_mapping

Return type

?

CommandLine:

python -m wbia_cnn.models.viewpoint --exec-label_order_mapping

Example

>>> # DISABLE_DOCTEST
>>> from wbia_cnn.models.viewpoint import *  # NOQA
>>> model = ViewpointModel()
>>> category_list = ['LEFT', 'FRONT_LEFT', 'FRONT', 'FRONT_RIGHT', 'RIGHT', 'BACK_RIGHT', 'BACK', 'BACK_LEFT']
>>> category_mapping = model.label_order_mapping(category_list)
>>> result = ('category_mapping = %s' % (str(category_mapping),))
>>> print(result)
learning_rate_shock(x)[source]
learning_rate_update(x)[source]
rrr(verbose=True, reload_module=True)

Special class reloading function. This function is often injected as rrr of classes.

Module contents

wbia_cnn.models.IMPORT_TUPLES = [('_model_legacy', None), ('abstract_models', None), ('aoi2', None), ('background', None), ('classifier', None), ('classifier2', None), ('labeler', None), ('dummy', None), ('mnist', None), ('pretrained', None), ('quality', None), ('siam', None), ('viewpoint', None)]

python -c "import wbia_cnn.models" --dump-wbia_cnn.models-init
python -c "import wbia_cnn.models" --update-wbia_cnn.models-init

wbia_cnn.models.reassign_submodule_attributes(verbose=1)[source]

Updates attributes in the __init__ modules with updated attributes in the submodules.

wbia_cnn.models.reload_subs(verbose=1)[source]

Reloads wbia_cnn.models and submodules

wbia_cnn.models.rrrr(verbose=1)

Reloads wbia_cnn.models and submodules