---
title: Experiments with loading 01
keywords: fastai
sidebar: home_sidebar
summary: "Contains some experiments with implemented functions"
description: "Contains some experiments with implemented functions"
nb_path: "nbs/03_load_tests.ipynb"
---
{% raw %}
{% endraw %} {% raw %}
# from fastai.tabular.all import *
# import json
# import torch
# from torch.autograd import Variable

# from transfertab.utils import *
# from transfertab.core import *
# from srsly import ujson

# import wandb
# from fastai.callback.wandb import WandbCallback


# wandb.init(settings=wandb.Settings(start_method="thread"))
wandb: wandb version 0.10.32 is available!  To upgrade, please run:
wandb:  $ pip install wandb --upgrade
Tracking run with wandb version 0.10.24
Syncing run kind-snow-1 to Weights & Biases.
Project page: https://wandb.ai/ishitadatta/transfertab-nbs
Run page: https://wandb.ai/ishitadatta/transfertab-nbs/runs/29wlpmae
Run data is saved locally in C:\Users\HP\transfertab\nbs\wandb\run-20210617_153338-29wlpmae

Run(29wlpmae)

{% endraw %} {% raw %}
# df2 = pd.read_csv('../data/toy_dataset.csv')
# splits2 = RandomSplitter(valid_pct=0.2)(range_of(df2))
# to2 = TabularPandas(df2, procs=[Categorify, FillMissing,Normalize],
#                    cat_names = ['City','sex'],
#                    cont_names = ['Number', 'Income', 'Age'],
#                    y_names='Illness',
#                    splits=splits2)
# dls2 = to2.dataloaders(bs=64)
# learn2 = tabular_learner(dls2, metrics=accuracy, cbs=WandbCallback(log_dataset=True, log_model=True))
{% endraw %} {% raw %}
# learn2.fit(1)
WandbCallback could not retrieve the dataset path, please provide it explicitly to "log_dataset"
WandbCallback requires use of "SaveModelCallback" to log best model

| epoch | train_loss | valid_loss | accuracy | time  |
|-------|------------|------------|----------|-------|
| 0     | 0.265406   | 0.287556   | 0.918533 | 00:33 |
D:\Anaconda\lib\site-packages\torch\nn\modules\module.py:795: UserWarning: Using a non-full backward hook when the forward contains multiple autograd Nodes is deprecated and will be removed in future versions. This hook will be missing some grad_input. Please use register_full_backward_hook to get the documented behavior.
  warnings.warn("Using a non-full backward hook when the forward contains multiple autograd Nodes "
{% endraw %} {% raw %}
# row, clas, probs = learn2.predict(df2.iloc[-1])
# row, clas, probs
(   City  sex    Number    Income       Age  Illness
 0   1.0  1.0  1.730422 -0.160174 -0.687693      0.0,
 tensor(0),
 tensor([0.9303, 0.0697]))
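The predicted class in the tuple above is simply the index of the largest value in the probability tensor. A minimal sketch of that relationship (the values are copied from the output above; the variable names are illustrative):

```python
import torch

# Class probabilities returned by learn2.predict for the last row
probs = torch.tensor([0.9303, 0.0697])

# The predicted class is the argmax over the class probabilities
pred_class = probs.argmax()
print(pred_class)  # tensor(0), matching the prediction above
```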
{% endraw %} {% raw %}
# tab_obj = TabTransfer(learn2)
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
 
tensor([[ 0.0078, -0.0057,  0.0079],
        [ 0.0084, -0.0047,  0.0006],
        [-0.0068, -0.0055,  0.0075]])
mean is tensor([ 0.0031, -0.0053,  0.0053]) for tensor([[ 0.0078, -0.0057,  0.0079],
        [ 0.0084, -0.0047,  0.0006],
        [-0.0068, -0.0055,  0.0075]])
0, <class 'int'>
Transferring weights for class #na#, cat sex using mean
old weight for class is tensor([ 0.0031, -0.0053,  0.0053], grad_fn=<SliceBackward>)
new weight for class is tensor([ 0.0031, -0.0053,  0.0053], grad_fn=<SliceBackward>)
1, <class 'int'>
Transferring weights for class Female, cat sex from previous weights
old weight for class is tensor([ 0.0078, -0.0057,  0.0079], grad_fn=<SliceBackward>)
new weight for class is tensor([ 0.0078, -0.0057,  0.0079], grad_fn=<SliceBackward>)
2, <class 'int'>
Transferring weights for class Male, cat sex from previous weights
old weight for class is tensor([ 0.0084, -0.0047,  0.0006], grad_fn=<SliceBackward>)
new weight for class is tensor([ 0.0084, -0.0047,  0.0006], grad_fn=<SliceBackward>)
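The log above shows the transfer strategy at work: classes present in the source vocabulary (`Female`, `Male`) keep their previous embedding rows, while `#na#` is initialised with the mean of the source weights. A rough sketch of that strategy in plain PyTorch (the helper name and signature are illustrative, not the transfertab API):

```python
import torch

def transfer_cat_weights(old_emb, new_emb, old_classes, new_classes):
    """Initialise new_emb from old_emb: shared classes keep their old row,
    while '#na#' (and any class absent from the old vocab) gets the mean
    of the old embedding weights. Illustrative helper only."""
    old_idx = {c: i for i, c in enumerate(old_classes)}
    mean_w = old_emb.weight.detach().mean(dim=0)
    with torch.no_grad():
        for i, c in enumerate(new_classes):
            if c != "#na#" and c in old_idx:
                new_emb.weight[i] = old_emb.weight[old_idx[c]]
            else:
                new_emb.weight[i] = mean_w

# rows correspond to the classes #na#, Female, Male (embedding size 3)
old_emb = torch.nn.Embedding(3, 3)
new_emb = torch.nn.Embedding(3, 3)
transfer_cat_weights(old_emb, new_emb,
                     ["#na#", "Female", "Male"], ["#na#", "Female", "Male"])
```

After the call, `new_emb` rows for `Female` and `Male` match the source model, and the `#na#` row equals the column-wise mean of the source weights, mirroring the log output above.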
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
 
|   | City   | sex    | Number        | Income      | Age  | Illness |
|---|--------|--------|---------------|-------------|------|---------|
| 0 | Austin | Female | 150000.001614 | 87251.00002 | 37.0 | No      |
{% endraw %} {% raw %}
 
(tensor(0), tensor([0.9316, 0.0684]))
{% endraw %}
---
{% raw %}
 
{% endraw %} {% raw %}
 
|   | City   | sex    | Number        | Income      | Age  | Illness |
|---|--------|--------|---------------|-------------|------|---------|
| 0 | Austin | Female | 150000.001614 | 87251.00002 | 37.0 | No      |
{% endraw %} {% raw %}
 
(tensor(0), tensor([0.9018, 0.0982]))
{% endraw %} {% raw %}
 
{% endraw %} {% raw %}
 
|   | City   | sex    | Number        | Income      | Age  | Illness |
|---|--------|--------|---------------|-------------|------|---------|
| 0 | Austin | Female | 150000.001614 | 87251.00002 | 37.0 | No      |
{% endraw %} {% raw %}
 
tensor([0.9316, 0.0684])
{% endraw %} {% raw %}
# wandb.finish()

Waiting for W&B process to finish, PID 0
Program ended successfully.
Find user logs for this run at: C:\Users\HP\transfertab\nbs\wandb\run-20210617_153338-29wlpmae\logs\debug.log
Find internal logs for this run at: C:\Users\HP\transfertab\nbs\wandb\run-20210617_153338-29wlpmae\logs\debug-internal.log

Run summary:

| metric     | value      |
|------------|------------|
| epoch      | 1          |
| train_loss | 0.26541    |
| raw_loss   | 0.28508    |
| wd_0       | 0.01       |
| sqr_mom_0  | 0.99       |
| lr_0       | 0.001      |
| mom_0      | 0.9        |
| eps_0      | 1e-05      |
| _runtime   | 415        |
| _timestamp | 1623924633 |
| _step      | 1874       |
| valid_loss | 0.28756    |
| accuracy   | 0.91853    |

Run history:

| metric     | history |
|------------|---------|
| epoch      | ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ |
| train_loss | █▇▅▃▂▂▁▂▁▁▁▂▁▂▁▁▁▂▁▁▁▂▂▁▁▂▂▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| raw_loss   | █▇▆▅▃▃▄▄▆▂▃▅▂▄▃▂▅▂▅▃▅▂▃▃▁▅▄▅▄▆▅▂▂▄▅▃▅▂▄▂ |
| wd_0       | ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| sqr_mom_0  | ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| lr_0       | ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| mom_0      | ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| eps_0      | ▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁ |
| _runtime   | ▁▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇████ |
| _timestamp | ▁▁▁▁▂▂▂▂▂▂▃▃▃▃▃▄▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇████ |
| _step      | ▁▁▁▁▂▂▂▂▂▃▃▃▃▃▃▄▄▄▄▄▅▅▅▅▅▅▆▆▆▆▆▇▇▇▇▇▇███ |
| valid_loss |  |
| accuracy   |  |

Synced 5 W&B file(s), 2 media file(s), 1 artifact file(s) and 0 other file(s)
{% endraw %}