Convenience functions to easily create a `Learner` for text applications

match_embeds[source]

match_embeds(old_wgts, old_vocab, new_vocab)

Convert the embeddings in old_wgts to go from old_vocab to new_vocab.

import torch
from fastcore.test import test_eq

wgts = {'0.encoder.weight': torch.randn(5,3)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old,new = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
test_eq(new[0], old[0])
test_eq(new[1], old[2])
test_eq(new[2], old.mean(0))
test_eq(new[3], old[1])
# With bias
wgts = {'0.encoder.weight': torch.randn(5,3), '1.decoder.bias': torch.randn(5)}
new_wgts = match_embeds(wgts.copy(), ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
old_w,new_w = wgts['0.encoder.weight'],new_wgts['0.encoder.weight']
old_b,new_b = wgts['1.decoder.bias'],  new_wgts['1.decoder.bias']
test_eq(new_w[0], old_w[0])
test_eq(new_w[1], old_w[2])
test_eq(new_w[2], old_w.mean(0))
test_eq(new_w[3], old_w[1])
test_eq(new_b[0], old_b[0])
test_eq(new_b[1], old_b[2])
test_eq(new_b[2], old_b.mean(0))
test_eq(new_b[3], old_b[1])
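The index-mapping behavior the tests above check can be sketched in plain Python (a simplified illustration, not the fastai implementation; `remap_rows` is a hypothetical name):

```python
def remap_rows(old_rows, old_vocab, new_vocab):
    # For each token in new_vocab, copy its old row if the token was in
    # old_vocab; otherwise fall back to the mean of all old rows, which
    # is what the tests above verify for the unseen token 'd'.
    idx = {w: i for i, w in enumerate(old_vocab)}
    dim = len(old_rows[0])
    mean = [sum(r[j] for r in old_rows) / len(old_rows) for j in range(dim)]
    return [old_rows[idx[w]] if w in idx else mean for w in new_vocab]

rows = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # vocab ['a', 'b', 'c']
new = remap_rows(rows, ['a', 'b', 'c'], ['a', 'c', 'd', 'b'])
```

Here `new[2]` (the row for the new token `'d'`) is the column-wise mean `[3.0, 4.0]`, while the other rows are reordered copies of the old ones.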

load_ignore_keys[source]

load_ignore_keys(model, wgts)

Load wgts in model, ignoring the names of the keys and just taking the parameters in order.
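A minimal sketch of positional loading (a hypothetical helper, not the fastai code): pair the model's own parameter names with the incoming values in order, discarding the incoming key names.

```python
def load_by_position(model_state, wgts):
    # Zip the model's keys with the incoming values in order,
    # ignoring the incoming key names entirely.
    return dict(zip(model_state.keys(), wgts.values()))

# The incoming state dict uses different key names, e.g. from a model
# wrapped differently at save time.
model_state = {'encoder.weight': None, 'decoder.bias': None}
wgts = {'0.module.weight': [1, 2], '1.module.bias': [3]}
merged = load_by_position(model_state, wgts)
```

This relies on both dicts being insertion-ordered with parameters in the same order, which is why the real function only works when the two models share an architecture.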

class TextLearner[source]

TextLearner(model, dls, alpha=2.0, beta=1.0, moms=(0.8, 0.7, 0.8), loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True) :: Learner

Basic class for a Learner in NLP.

decode_spec_tokens[source]

decode_spec_tokens(tokens)

Decode the special tokens in tokens.
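fastai's tokenizer emits special tokens such as `xxmaj` (next word capitalized) and `xxup` (next word all uppercase); a simplified sketch of the reverse mapping (it omits the `xxrep`/`xxwrep` repetition tokens the real function also handles):

```python
def decode_special(tokens):
    # Apply the transformation announced by a special token to the
    # word that follows it; pass other tokens through unchanged.
    out, rule = [], None
    for t in tokens:
        if t == 'xxmaj':
            rule = str.capitalize
        elif t == 'xxup':
            rule = str.upper
        else:
            out.append(rule(t) if rule else t)
            rule = None
    return out
```

For example, `['xxmaj', 'hello', 'xxup', 'world']` decodes to `['Hello', 'WORLD']`.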

class LMLearner[source]

LMLearner(model, dls, alpha=2.0, beta=1.0, moms=(0.8, 0.7, 0.8), loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True) :: TextLearner

Add functionality to TextLearner when dealing with a language model.

language_model_learner[source]

language_model_learner(dls, arch, config=None, drop_mult=1.0, pretrained=True, pretrained_fnames=None, loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95))

Create a Learner with a language model from data and arch.

text_classifier_learner[source]

text_classifier_learner(dls, arch, seq_len=72, config=None, pretrained=True, drop_mult=0.5, n_out=None, lin_ftrs=None, ps=None, max_len=1440, loss_func=None, opt_func='Adam', lr=0.001, splitter='trainable_params', cbs=None, metrics=None, path=None, model_dir='models', wd=None, wd_bn_bias=False, train_bn=True, moms=(0.95, 0.85, 0.95))

Create a Learner with a text classifier from data and arch.