---
title: Factorization Machines
keywords: fastai
sidebar: home_sidebar
summary: "Implementation of factorization machine models like FM, DeepFM, AFM, NCF, xDeepFM"
description: "Implementation of factorization machine models like FM, DeepFM, AFM, NCF, xDeepFM"
nb_path: "nbs/models/tf/fm.ipynb"
---
Factorization Machines (FMs) are a supervised learning approach that enhances the linear regression model by incorporating second-order feature interactions. FM-type algorithms combine linear regression with matrix factorization: the key idea is to model interactions between features (a.k.a. attributes or explanatory variables) using factorized parameters. This gives them the ability to estimate all pairwise interactions between features even with extremely sparse data.
Factorization machines (FM) [Rendle, 2010], proposed by Steffen Rendle in 2010, are a supervised algorithm that can be used for classification, regression, and ranking tasks. They quickly gained attention and became a popular and impactful method for prediction and recommendation. In particular, FM generalizes both the linear regression model and the matrix factorization model, and it is reminiscent of support vector machines with a polynomial kernel. The strengths of factorization machines over linear regression and matrix factorization are: (1) they can model χ-way variable interactions, where χ is the polynomial order and is usually set to two; (2) a fast optimization algorithm associated with factorization machines reduces the polynomial computation time to linear complexity, making it extremely efficient especially for high-dimensional sparse inputs. For these reasons, factorization machines are widely employed in modern advertisement and product recommendation.
Most recommendation problems assume that we have a consumption/rating dataset formed by a collection of (user, item, rating) tuples. This is the starting point for most variations of Collaborative Filtering algorithms and they have proven to yield nice results; however, in many applications, we have plenty of item metadata (tags, categories, genres) that can be used to make better predictions. This is one of the benefits of using Factorization Machines with feature-rich datasets, for which there is a natural way in which extra features can be included in the model and higher-order interactions can be modeled using the dimensionality parameter d. For sparse datasets, a second-order FM model suffices, since there is not enough information to estimate more complex interactions.
{% raw %} $$f(x) = w_0 + \sum_{p=1}^Pw_px_p + \sum_{p=1}^{P-1}\sum_{q=p+1}^Pw_{p,q}x_px_q$$ {% endraw %}
This model formulation may look familiar — it's simply a quadratic linear regression. However, unlike polynomial linear models which estimate each interaction term separately, FMs instead use factorized interaction parameters: feature interaction weights are represented as the inner product of the two features' latent factor space embeddings:
{% raw %} $$f(x) = w_0 + \sum_{p=1}^Pw_px_p + \sum_{p=1}^{P-1}\sum_{q=p+1}^P\langle v_p,v_q \rangle x_px_q$$ {% endraw %}
This greatly decreases the number of parameters to estimate while at the same time facilitating more accurate estimation by breaking the strict independence criteria between interaction terms. Consider a realistic recommendation data set with 1,000,000 users and 10,000 items. A quadratic linear model would need to estimate U + I + UI ≈ 10 billion parameters, whereas an FM model with factor dimension F = 10 needs only U + I + F(U + I) ≈ 11 million parameters. Additionally, many common MF algorithms (including SVD++ and ALS) can be re-formulated as special cases of the more general and flexible FM model class.
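As a quick sanity check of those counts, here is the arithmetic spelled out in a few lines of Python (the sizes are the assumed example figures from the paragraph above, not a real dataset):

```python
# Back-of-the-envelope parameter counts for the example above.
U, I, F = 1_000_000, 10_000, 10

quadratic_linear = U + I + U * I   # a weight per user, per item, and per (user, item) pair
fm = U + I + F * (U + I)           # a weight per feature plus an F-dimensional latent vector each

print(f"quadratic linear model: {quadratic_linear:,} parameters")  # ~10 billion
print(f"FM with F = 10:         {fm:,} parameters")                # ~11 million
```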
The above equation can be rewritten as:
{% raw %} $$\hat{y}(\mathbf{x}) = w_{0} + \sum_{i=1}^{n} w_{i} x_{i} + \sum_{i=1}^n \sum_{j=i+1}^n \hat{w}_{ij} x_{i} x_{j}$$ {% endraw %}

where $w_0$ is the global bias, $w_i$ is the weight of the $i$-th feature, and $\hat{w}_{ij} = \langle v_i, v_j \rangle$ is the factorized weight of the interaction between features $i$ and $j$.
Factorization machines emerged as a method that combines reasonable accuracy and speed with the ability to handle extremely sparse data:
| Method | Accuracy | Speed | Sparsity |
|---|---|---|---|
| Collaborative Filtering | Too Accurate | Suitable | Suitable |
| SVM | Too Accurate | Suitable | Unsuitable |
| Random Forest/CART | General Accuracy | Unsuitable | Unsuitable |
| Factorization Machines (FM) | General Accuracy | Quick | Designed for it |
To learn the FM model, we can use the MSE loss for regression tasks, the cross-entropy loss for classification tasks, and the BPR loss for ranking tasks. Standard optimizers such as SGD and Adam are viable for optimization.
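The linear-complexity claim above comes from rewriting the pairwise term with the identity

{% raw %} $$\sum_{p=1}^{P-1}\sum_{q=p+1}^P\langle v_p,v_q\rangle x_px_q = \frac{1}{2}\sum_{f=1}^k\left[\left(\sum_{p=1}^P v_{p,f}x_p\right)^2 - \sum_{p=1}^P v_{p,f}^2x_p^2\right]$$ {% endraw %}

which avoids enumerating all feature pairs. Below is a minimal TensorFlow sketch of that computation, assuming a dense input `x` of shape `(batch, P)` and a factor matrix `V` of shape `(P, k)`; the names are illustrative and not the internals of the `FM_Layer` class documented next.

```python
import tensorflow as tf

def fm_second_order(x, V):
    """FM pairwise-interaction term in O(P*k) time instead of O(P^2 * k)."""
    xv = tf.matmul(x, V)                                    # (batch, k): sum_p v_{p,f} x_p
    square_of_sum = tf.square(xv)                           # (sum_p v_{p,f} x_p)^2
    sum_of_square = tf.matmul(tf.square(x), tf.square(V))   # sum_p v_{p,f}^2 x_p^2
    return 0.5 * tf.reduce_sum(square_of_sum - sum_of_square, axis=1, keepdims=True)

# Toy check on random dense data (real FM inputs are typically sparse one-hot features).
x = tf.random.uniform((4, 6))        # batch of 4, P = 6 features
V = tf.random.normal((6, 3))         # k = 3 latent factors per feature
print(fm_second_order(x, V).shape)   # (4, 1)
```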
`class FM_Layer(*args, **kwargs) :: Layer` [source]

`FM_Layer` subclasses `tf.keras.layers.Layer`; the text below is the inherited base-class reference.

This is the class from which all layers inherit.

A layer is a callable object that takes as input one or more tensors and that outputs one or more tensors. It involves computation, defined in the `call()` method, and a state (weight variables), defined either in the constructor `__init__()` or in the `build()` method.

Users will just instantiate a layer and then treat it as a callable.

Args:
- `trainable`: Boolean, whether the layer's variables should be trainable.
- `name`: String name of the layer.
- `dtype`: The dtype of the layer's computations and weights. Can also be a `tf.keras.mixed_precision.Policy`, which allows the computation and weight dtype to differ. Default of `None` means to use `tf.keras.mixed_precision.global_policy()`, which is a float32 policy unless set to a different value.
- `dynamic`: Set this to `True` if your layer should only be run eagerly and should not be used to generate a static computation graph. This would be the case for a Tree-RNN or a recursive network, for example, or generally for any layer that manipulates tensors using Python control flow. If `False`, we assume that the layer can safely be used to generate a static computation graph.

Attributes:
- `name`: The name of the layer (string).
- `dtype`: The dtype of the layer's weights.
- `variable_dtype`: Alias of `dtype`.
- `compute_dtype`: The dtype of the layer's computations. Layers automatically cast inputs to this dtype, which causes the computations and output to also be in this dtype. When mixed precision is used with a `tf.keras.mixed_precision.Policy`, this will be different than `variable_dtype`.
- `dtype_policy`: The layer's dtype policy. See the `tf.keras.mixed_precision.Policy` documentation for details.
- `trainable_weights`: List of variables to be included in backprop.
- `non_trainable_weights`: List of variables that should not be included in backprop.
- `weights`: The concatenation of the lists `trainable_weights` and `non_trainable_weights` (in this order).
- `trainable`: Whether the layer should be trained (boolean), i.e. whether its potentially-trainable weights should be returned as part of `layer.trainable_weights`.
- `input_spec`: Optional (list of) `InputSpec` object(s) specifying the constraints on inputs that can be accepted by the layer.

We recommend that descendants of `Layer` implement the following methods:

- `__init__()`: Defines custom layer attributes, and creates layer state variables that do not depend on input shapes, using `add_weight()`.
- `build(self, input_shape)`: This method can be used to create weights that depend on the shape(s) of the input(s), using `add_weight()`. `__call__()` will automatically build the layer (if it has not been built yet) by calling `build()`.
- `call(self, inputs, *args, **kwargs)`: Called in `__call__` after making sure `build()` has been called. `call()` performs the logic of applying the layer to the input tensors (which should be passed in as arguments). Two reserved keyword arguments you can optionally use in `call()` are `training` (boolean, whether the call is in inference mode or training mode) and `mask` (boolean tensor encoding masked timesteps in the input, used in RNN layers); see the layer/model subclassing guide for details. A typical signature for this method is `call(self, inputs)`, and the user could optionally add `training` and `mask` if the layer needs them. `*args` and `**kwargs` are only useful for future extension when more input parameters are planned to be added.
- `get_config(self)`: Returns a dictionary containing the configuration used to initialize this layer. If the keys differ from the arguments in `__init__`, then override `from_config(self)` as well. This method is used when saving the layer or a model that contains this layer.

Examples:

Here's a basic example: a layer with two variables, `w` and `b`, that returns `y = w . x + b`. It shows how to implement `build()` and `call()`. Variables set as attributes of a layer are tracked as weights of the layer (in `layer.weights`).

```python
import tensorflow as tf
from tensorflow.keras.layers import Layer

class SimpleDense(Layer):

    def __init__(self, units=32):
        super(SimpleDense, self).__init__()
        self.units = units

    def build(self, input_shape):  # Create the state of the layer (weights)
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_shape[-1], self.units),
                                 dtype='float32'),
            trainable=True)
        b_init = tf.zeros_initializer()
        self.b = tf.Variable(
            initial_value=b_init(shape=(self.units,), dtype='float32'),
            trainable=True)

    def call(self, inputs):  # Defines the computation from inputs to outputs
        return tf.matmul(inputs, self.w) + self.b

# Instantiates the layer.
linear_layer = SimpleDense(4)

# This will also call `build(input_shape)` and create the weights.
y = linear_layer(tf.ones((2, 2)))
assert len(linear_layer.weights) == 2

# These weights are trainable, so they're listed in `trainable_weights`:
assert len(linear_layer.trainable_weights) == 2
```

Note that the method `add_weight()` offers a shortcut to create weights:

```python
class SimpleDense(Layer):

    def __init__(self, units=32):
        super(SimpleDense, self).__init__()
        self.units = units

    def build(self, input_shape):
        self.w = self.add_weight(shape=(input_shape[-1], self.units),
                                 initializer='random_normal',
                                 trainable=True)
        self.b = self.add_weight(shape=(self.units,),
                                 initializer='random_normal',
                                 trainable=True)

    def call(self, inputs):
        return tf.matmul(inputs, self.w) + self.b
```

Besides trainable weights, updated via backpropagation during training, layers can also have non-trainable weights. These weights are meant to be updated manually during `call()`. Here's an example layer that computes the running sum of its inputs:

```python
class ComputeSum(Layer):

    def __init__(self, input_dim):
        super(ComputeSum, self).__init__()
        # Create a non-trainable weight.
        self.total = tf.Variable(initial_value=tf.zeros((input_dim,)),
                                 trainable=False)

    def call(self, inputs):
        self.total.assign_add(tf.reduce_sum(inputs, axis=0))
        return self.total

my_sum = ComputeSum(2)
x = tf.ones((2, 2))

y = my_sum(x)
print(y.numpy())  # [2. 2.]

y = my_sum(x)
print(y.numpy())  # [4. 4.]

assert my_sum.weights == [my_sum.total]
assert my_sum.non_trainable_weights == [my_sum.total]
assert my_sum.trainable_weights == []
```

For more information about creating layers, see the guide "Making new Layers and Models via subclassing".
`class FM(*args, **kwargs) :: Model` [source]

`FM` subclasses `tf.keras.Model`; the text below is the inherited base-class reference.

`Model` groups layers into an object with training and inference features.

Args:
- `inputs`: The input(s) of the model: a `keras.Input` object or list of `keras.Input` objects.
- `outputs`: The output(s) of the model. See the Functional API example below.
- `name`: String, the name of the model.

There are two ways to instantiate a `Model`:

1 - With the "Functional API", where you start from `Input`, you chain layer calls to specify the model's forward pass, and finally you create your model from inputs and outputs:

```python
import tensorflow as tf

inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```

Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of lists or dicts of dicts).

A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.

Example:

```python
inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
```

Note that the `backbone` and `activations` models are not created with `keras.Input` objects, but with the tensors that originate from `keras.Input` objects. Under the hood, the layers and weights will be shared across these models, so that the user can train the `full_model`, and use `backbone` or `activations` to do feature extraction. The inputs and outputs of the model can be nested structures of tensors as well, and the created models are standard Functional API models that support all the existing APIs.

2 - By subclassing the `Model` class: in that case, you should define your layers in `__init__()` and you should implement the model's forward pass in `call()`.

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
```

If you subclass `Model`, you can optionally have a `training` argument (boolean) in `call()`, which you can use to specify a different behavior in training and inference:

```python
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

model = MyModel()
```

Once the model is created, you can configure the model with losses and metrics with `model.compile()`, train the model with `model.fit()`, or use the model to do prediction with `model.predict()`.
```python
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = FM(features, k=8)
    model.summary()

test_model()
```
Model: "model_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_5 (InputLayer) [(None, 2)] 0 fm__layer (FM_Layer) (None, 1) 1801 tf.math.sigmoid (TFOpLambda (None, 1) 0 ) ================================================================= Total params: 1,801 Trainable params: 1,801 Non-trainable params: 0 _________________________________________________________________
Despite its effectiveness, FM is limited by the fact that it models all feature interactions with the same weight, even though not all feature interactions are equally useful and predictive. For example, interactions with useless features may even introduce noise and degrade performance.
| Model | For each | Learns |
|---|---|---|
| Linear | feature | a weight |
| Poly | feature pair | a weight |
| FM | feature | a latent vector |
| FFM | feature | multiple latent vectors |
Field-aware factorization machine (FFM) is an extension of FM, originally introduced in [2]. The advantage of FFM over FM is that it uses different factorized latent factors for different groups of features, where a "group" is called a "field" in the context of FFM. Putting features into fields resolves the issue that a single latent vector shared across features representing intuitively different categories of information may not generalize the correlations well. FFM addresses this by splitting the original latent space into smaller latent spaces specific to the fields of the features.
{% raw %} $$\phi(\pmb{w}, \pmb{x}) = w_0 + \sum\limits_{i=1}^n w_i x_i + \sum\limits_{i=1}^n \sum\limits_{j=i + 1}^n \langle \mathbf{v}_{i, f_{j}}, \mathbf{v}_{j, f_{i}} \rangle x_i x_j$$ {% endraw %}

where $f_i$ and $f_j$ denote the fields that features $i$ and $j$ belong to.
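A minimal sketch of that field-aware pairwise term, under simple assumptions: each feature belongs to exactly one field, `V` stores one latent vector per (feature, field) pair with shape `(n_features, n_fields, k)`, and the double loop mirrors the formula directly. This is illustrative only, not the vectorized implementation inside `FFM_Layer`.

```python
import tensorflow as tf

def ffm_second_order(x, V, field_of):
    """Field-aware pairwise term: sum_{i<j} <v_{i,f_j}, v_{j,f_i}> x_i x_j."""
    n = x.shape[1]
    out = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            v_i = V[i, field_of[j]]   # latent vector of feature i, specific to j's field
            v_j = V[j, field_of[i]]   # latent vector of feature j, specific to i's field
            out += tf.reduce_sum(v_i * v_j) * x[:, i] * x[:, j]
    return tf.expand_dims(out, -1)

x = tf.random.uniform((4, 3))                              # batch of 4, 3 features
V = tf.random.normal((3, 2, 5))                            # 3 features, 2 fields, k = 5
print(ffm_second_order(x, V, field_of=[0, 0, 1]).shape)    # (4, 1)
```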
`class FFM_Layer(*args, **kwargs) :: Layer` [source]
The inherited `tf.keras.layers.Layer` documentation is identical to the base-class reference shown above under `FM_Layer`.
`class FFM(*args, **kwargs) :: Model` [source]
The inherited `tf.keras.Model` documentation is identical to the base-class reference shown above under `FM`.
```python
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = FFM(features, k=8)
    model.summary()

test_model()
```
Model: "model_2" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= input_6 (InputLayer) [(None, 2)] 0 ffm__layer (FFM_Layer) (None, 1) 3401 tf.math.sigmoid_1 (TFOpLamb (None, 1) 0 da) ================================================================= Total params: 3,401 Trainable params: 3,401 Non-trainable params: 0 _________________________________________________________________
NFM seamlessly combines the linearity of FM in modelling second-order feature interactions with the non-linearity of neural networks in modelling higher-order feature interactions. Conceptually, NFM is more expressive than FM, since FM can be seen as a special case of NFM without hidden layers.
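The component that feeds FM's second-order signal into the network is the Bi-Interaction pooling layer, which compresses the stack of feature embeddings into a single k-dimensional vector before the MLP. A minimal sketch, assuming `embeds` has shape `(batch, n_fields, k)`; this is illustrative and not the exact internals of the `NFM` class below.

```python
import tensorflow as tf

def bi_interaction_pooling(embeds):
    """0.5 * ((sum_i e_i)^2 - sum_i e_i^2), computed elementwise over the embedding dimension."""
    square_of_sum = tf.square(tf.reduce_sum(embeds, axis=1))   # (batch, k)
    sum_of_square = tf.reduce_sum(tf.square(embeds), axis=1)   # (batch, k)
    return 0.5 * (square_of_sum - sum_of_square)

embeds = tf.random.normal((4, 2, 8))     # batch of 4, 2 fields, embedding dim 8
pooled = bi_interaction_pooling(embeds)
print(pooled.shape)  # (4, 8) -> fed to the DNN, then a Dense(1) and a sigmoid
```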
`class DNN(*args, **kwargs) :: Layer` [source]
The inherited `tf.keras.layers.Layer` documentation is identical to the base-class reference shown above under `FM_Layer`.
`class NFM(*args, **kwargs) :: Model` [source]
The inherited `tf.keras.Model` documentation is identical to the base-class reference shown above under `FM`.
```python
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = NFM(features, hidden_units=[8, 4, 2], dnn_dropout=0.5)
    model.summary()

test_model()
```
Model: "model_3" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_7 (InputLayer) [(None, 2)] 0 [] tf.__operators__.getitem (Slic (None,) 0 ['input_7[0][0]'] ingOpLambda) tf.__operators__.getitem_1 (Sl (None,) 0 ['input_7[0][0]'] icingOpLambda) embedding_3 (Embedding) (None, 8) 800 ['tf.__operators__.getitem[0][0]' ] embedding_4 (Embedding) (None, 8) 800 ['tf.__operators__.getitem_1[0][0 ]'] tf.convert_to_tensor (TFOpLamb (2, None, 8) 0 ['embedding_3[0][0]', da) 'embedding_4[0][0]'] tf.compat.v1.transpose (TFOpLa (None, 2, 8) 0 ['tf.convert_to_tensor[0][0]'] mbda) tf.math.reduce_sum_5 (TFOpLamb (None, 8) 0 ['tf.compat.v1.transpose[0][0]'] da) tf.math.pow_1 (TFOpLambda) (None, 2, 8) 0 ['tf.compat.v1.transpose[0][0]'] tf.math.pow (TFOpLambda) (None, 8) 0 ['tf.math.reduce_sum_5[0][0]'] tf.math.reduce_sum_6 (TFOpLamb (None, 8) 0 ['tf.math.pow_1[0][0]'] da) tf.math.subtract_5 (TFOpLambda (None, 8) 0 ['tf.math.pow[0][0]', ) 'tf.math.reduce_sum_6[0][0]'] tf.math.multiply_4 (TFOpLambda (None, 8) 0 ['tf.math.subtract_5[0][0]'] ) batch_normalization (BatchNorm (None, 8) 32 ['tf.math.multiply_4[0][0]'] alization) dnn (DNN) (None, 2) 118 ['batch_normalization[0][0]'] dense_3 (Dense) (None, 1) 3 ['dnn[0][0]'] tf.math.sigmoid_2 (TFOpLambda) (None, 1) 0 ['dense_3[0][0]'] ================================================================================================== Total params: 1,753 Trainable params: 1,737 Non-trainable params: 16 __________________________________________________________________________________________________
AFM improves FM by discriminating the importance of different feature interactions: it learns the importance of each feature interaction from data via a neural attention network. Empirically, on regression tasks AFM betters FM with an 8.6% relative improvement, and it consistently outperforms the state-of-the-art deep learning methods Wide&Deep and DeepCross with a much simpler structure and fewer model parameters.
Formally, the AFM model can be defined as:
{% raw %} $$\hat{y}_{AFM} (x) = w_0 + \sum_{i=1}^nw_ix_i + p^T\sum_{i=1}^n\sum_{j=i+1}^na_{ij}(v_i\odot v_j)x_ix_j$$ {% endraw %}
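A minimal sketch of that attention-weighted pairwise term: each element-wise product v_i ⊙ v_j is scored by a small MLP, the scores are normalized with a softmax to give a_ij, and the weighted sum of interactions is projected to a scalar by p. The shapes and weight names below are assumptions for illustration, not the internals of the `AFM` class.

```python
import tensorflow as tf

def afm_interaction(embeds, att_w, att_b, att_h, p):
    """Attention-weighted pairwise interactions for AFM (dense toy version)."""
    _, n, _ = embeds.shape
    pairs = []
    for i in range(n):
        for j in range(i + 1, n):
            pairs.append(embeds[:, i] * embeds[:, j])       # v_i ⊙ v_j (x_i, x_j folded into embeds)
    pairs = tf.stack(pairs, axis=1)                         # (batch, n_pairs, k)
    scores = tf.matmul(tf.nn.relu(tf.matmul(pairs, att_w) + att_b), att_h)  # attention MLP
    a = tf.nn.softmax(scores, axis=1)                       # a_ij, (batch, n_pairs, 1)
    return tf.matmul(tf.reduce_sum(a * pairs, axis=1), p)   # (batch, 1)

n, k, t = 3, 8, 4                                           # t = attention factor size
embeds = tf.random.normal((4, n, k))
out = afm_interaction(embeds,
                      att_w=tf.random.normal((k, t)), att_b=tf.zeros((t,)),
                      att_h=tf.random.normal((t, 1)), p=tf.random.normal((k, 1)))
print(out.shape)  # (4, 1)
```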
`class AFM(*args, **kwargs) :: Model` [source]
The inherited `tf.keras.Model` documentation is identical to the base-class reference shown above under `FM`.
```python
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = AFM(features, mode='att', att_vector=8,
                activation='relu', dropout=0.5, embed_reg=1e-5)
    model.summary()

test_model()
```
Model: "model_4" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_8 (InputLayer) [(None, 2)] 0 [] tf.__operators__.getitem_2 (Sl (None,) 0 ['input_8[0][0]'] icingOpLambda) tf.__operators__.getitem_3 (Sl (None,) 0 ['input_8[0][0]'] icingOpLambda) embedding_5 (Embedding) (None, 8) 800 ['tf.__operators__.getitem_2[0][0 ]'] embedding_6 (Embedding) (None, 8) 800 ['tf.__operators__.getitem_3[0][0 ]'] tf.convert_to_tensor_1 (TFOpLa (2, None, 8) 0 ['embedding_5[0][0]', mbda) 'embedding_6[0][0]'] tf.compat.v1.transpose_1 (TFOp (None, 2, 8) 0 ['tf.convert_to_tensor_1[0][0]'] Lambda) tf.compat.v1.gather (TFOpLambd (None, 1, 8) 0 ['tf.compat.v1.transpose_1[0][0]' a) ] tf.compat.v1.gather_1 (TFOpLam (None, 1, 8) 0 ['tf.compat.v1.transpose_1[0][0]' bda) ] tf.math.multiply_5 (TFOpLambda (None, 1, 8) 0 ['tf.compat.v1.gather[0][0]', ) 'tf.compat.v1.gather_1[0][0]'] dense_4 (Dense) (None, 1, 8) 72 ['tf.math.multiply_5[0][0]'] dense_5 (Dense) (None, 1, 1) 9 ['dense_4[0][0]'] tf.nn.softmax (TFOpLambda) (None, 1, 1) 0 ['dense_5[0][0]'] tf.math.multiply_6 (TFOpLambda (None, 1, 8) 0 ['tf.math.multiply_5[0][0]', ) 'tf.nn.softmax[0][0]'] tf.math.reduce_sum_7 (TFOpLamb (None, 8) 0 ['tf.math.multiply_6[0][0]'] da) dense_6 (Dense) (None, 1) 9 ['tf.math.reduce_sum_7[0][0]'] tf.math.sigmoid_3 (TFOpLambda) (None, 1) 0 ['dense_6[0][0]'] ================================================================================================== Total params: 1,690 Trainable params: 1,690 Non-trainable params: 0 __________________________________________________________________________________________________
DeepFM consists of an FM component and a deep component, integrated in a parallel structure. The FM component is the same as the 2-way factorization machine and is used to model low-order feature interactions. The deep component is a multi-layer perceptron that captures high-order feature interactions and nonlinearities. These two components share the same inputs/embeddings, and their outputs are summed to form the final prediction. It is worth pointing out that the spirit of DeepFM resembles that of the Wide & Deep architecture, which can capture both memorization and generalization. The advantage of DeepFM over the Wide & Deep model is that it reduces the effort of hand-crafted feature engineering by identifying feature combinations automatically.
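A hedged outline of that parallel structure: the same field embeddings feed both the FM pairwise term and an MLP, and the linear, FM, and deep logits are summed before the sigmoid. The function and argument names are illustrative, not the `DeepFM` internals.

```python
import tensorflow as tf

def deepfm_predict(embeds, x_linear, w_linear, mlp):
    """DeepFM: linear term + FM pairwise term + deep term on shared embeddings."""
    linear = tf.matmul(x_linear, w_linear)                         # (batch, 1) first-order part
    square_of_sum = tf.square(tf.reduce_sum(embeds, axis=1))       # FM pairwise part ...
    sum_of_square = tf.reduce_sum(tf.square(embeds), axis=1)
    fm = 0.5 * tf.reduce_sum(square_of_sum - sum_of_square, axis=1, keepdims=True)
    deep = mlp(tf.reshape(embeds, (tf.shape(embeds)[0], -1)))      # MLP on the flattened embeddings
    return tf.nn.sigmoid(linear + fm + deep)                       # summed logits -> sigmoid

mlp = tf.keras.Sequential([tf.keras.layers.Dense(4, activation='relu'),
                           tf.keras.layers.Dense(1)])
embeds = tf.random.normal((4, 2, 8))                               # batch of 4, 2 fields, dim 8
y_hat = deepfm_predict(embeds, x_linear=tf.random.uniform((4, 2)),
                       w_linear=tf.random.normal((2, 1)), mlp=mlp)
print(y_hat.shape)  # (4, 1)
```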
`class DeepFM(*args, **kwargs) :: Model` [source]
The inherited `tf.keras.Model` documentation is identical to the base-class reference shown above under `FM`.
```python
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = DeepFM(features, hidden_units=[8, 4, 2], dnn_dropout=0.5)
    model.summary()

test_model()
```
Model: "model_5" __________________________________________________________________________________________________ Layer (type) Output Shape Param # Connected to ================================================================================================== input_9 (InputLayer) [(None, 2)] 0 [] tf.__operators__.getitem_4 (Sl (None,) 0 ['input_9[0][0]'] icingOpLambda) tf.__operators__.getitem_5 (Sl (None,) 0 ['input_9[0][0]'] icingOpLambda) embedding_13 (Embedding) (None, 8) 800 ['tf.__operators__.getitem_4[0][0 ]'] embedding_14 (Embedding) (None, 8) 800 ['tf.__operators__.getitem_5[0][0 ]'] tf.concat (TFOpLambda) (None, 16) 0 ['embedding_13[0][0]', 'embedding_14[0][0]'] tf.reshape (TFOpLambda) (None, 2, 8) 0 ['tf.concat[0][0]'] tf.__operators__.add_3 (TFOpLa (None, 2) 0 ['input_9[0][0]'] mbda) dnn_1 (DNN) (None, 2) 182 ['tf.concat[0][0]'] fm__layer_v2 (FM_Layer_v2) (None, 1) 200 ['tf.reshape[0][0]', 'tf.__operators__.add_3[0][0]'] dense_10 (Dense) (None, 1) 3 ['dnn_1[0][0]'] tf.math.add (TFOpLambda) (None, 1) 0 ['fm__layer_v2[0][0]', 'dense_10[0][0]'] tf.math.sigmoid_4 (TFOpLambda) (None, 1) 0 ['tf.math.add[0][0]'] ================================================================================================== Total params: 1,985 Trainable params: 1,985 Non-trainable params: 0 __________________________________________________________________________________________________
xDeepFM combines the CIN and a classical DNN into one unified model. xDeepFM is able to learn certain bounded-degree feature interactions explicitly, while it can also learn arbitrary low- and high-order feature interactions implicitly.
The architecture of xDeepFM.
The Compressed Interaction Network (CIN) aims to generate feature interactions in an explicit fashion and at the vector-wise level. CIN shares some functionality with convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
Components and architecture of the Compressed Interaction Network (CIN).
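A minimal sketch of one CIN step under the paper's shapes: the base feature map `X^0` has shape `(batch, m, D)`, the previous hidden map `X^l` has shape `(batch, H_l, D)`, all vector-wise products along the embedding dimension `D` are formed, and learned weights compress them into `H_{l+1}` new feature maps. This is illustrative, not the exact xDeepFM implementation.

```python
import tensorflow as tf

def cin_layer(x0, xl, w):
    """One CIN step: x0 (batch, m, D), xl (batch, H_l, D), w (H_next, H_l, m)."""
    # Vector-wise outer interactions: z[b, h, i, d] = xl[b, h, d] * x0[b, i, d]
    z = tf.einsum('bhd,bid->bhid', xl, x0)
    # Compress the (H_l, m) interaction grid into H_next feature maps, per embedding slot d.
    return tf.einsum('bhid,ohi->bod', z, w)

x0 = tf.random.normal((4, 2, 8))                        # m = 2 fields, D = 8
x1 = cin_layer(x0, x0, tf.random.normal((3, 2, 2)))     # first step: H_0 = m, H_1 = 3
print(x1.shape)  # (4, 3, 8)
# The final CIN output sum-pools each feature map over D and concatenates across steps.
```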
`class DNN_v2(*args, **kwargs) :: Layer` [source]
The inherited `tf.keras.layers.Layer` documentation is identical to the base-class reference shown above under `FM_Layer`.
`class Linear(*args, **kwargs) :: Layer` [source]
class CIN [source]

CIN(*args, **kwargs) :: Layer
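CIN is the Compressed Interaction Network introduced with xDeepFM: starting from the stacked field embeddings X0 of shape (fields, embed_dim), each layer forms all element-wise products between the rows of the previous layer's output and X0, compresses them with a learned weight matrix into a fixed number of feature maps, sum-pools each map over the embedding dimension, and concatenates the pooled maps of every layer. With 2 fields and cin_size=[4, 4] this gives 16 + 32 = 48 weights and a (None, 8) output, matching cin_2 in the summary below. The following is an illustrative sketch of that computation; the class name and internals are assumptions, not the repository's implementation:

import tensorflow as tf
from tensorflow.keras.layers import Layer

class CINSketch(Layer):
    """Compressed Interaction Network (xDeepFM) -- an illustrative sketch."""

    def __init__(self, cin_size=(4, 4), **kwargs):
        super().__init__(**kwargs)
        self.cin_size = list(cin_size)  # feature maps produced by each CIN layer

    def build(self, input_shape):
        m = int(input_shape[1])  # number of fields in X^0
        self.field_nums = [m] + self.cin_size
        # One weight matrix per CIN layer: compresses the H_k * m pairwise
        # interaction maps down to H_{k+1} new feature maps.
        self.kernels = [
            self.add_weight(name=f'cin_w_{k}',
                            shape=(self.field_nums[k + 1], self.field_nums[k] * m),
                            initializer='glorot_uniform',
                            trainable=True)
            for k in range(len(self.cin_size))
        ]

    def call(self, inputs):  # inputs: (batch, m, embed_dim)
        x0, xk, pooled = inputs, inputs, []
        m, d = self.field_nums[0], inputs.shape[-1]
        for k, w in enumerate(self.kernels):
            # z[b, h, i, :] = xk[b, h, :] * x0[b, i, :]  -> (batch, H_k, m, d)
            z = tf.einsum('bhd,bid->bhid', xk, x0)
            z = tf.reshape(z, (-1, self.field_nums[k] * m, d))
            # Compress to the next layer's H_{k+1} feature maps: (batch, H_{k+1}, d)
            xk = tf.einsum('hp,bpd->bhd', w, z)
            # Sum-pool each feature map over the embedding dimension.
            pooled.append(tf.reduce_sum(xk, axis=-1))
        # Concatenate the pooled maps of every layer: (batch, sum(cin_size)).
        return tf.concat(pooled, axis=-1)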
class xDeepFM [source]

xDeepFM(*args, **kwargs) :: Model

Model groups layers into an object with training and inference features.
Args:
inputs: The input(s) of the model: a keras.Input
object or list of
keras.Input
objects.
outputs: The output(s) of the model. See Functional API example below.
name: String, the name of the model.
There are two ways to instantiate a Model
:
1 - With the "Functional API", where you start from Input
,
you chain layer calls to specify the model's forward pass,
and finally you create your model from inputs and outputs:
import tensorflow as tf
inputs = tf.keras.Input(shape=(3,))
x = tf.keras.layers.Dense(4, activation=tf.nn.relu)(inputs)
outputs = tf.keras.layers.Dense(5, activation=tf.nn.softmax)(x)
model = tf.keras.Model(inputs=inputs, outputs=outputs)
Note: Only dicts, lists, and tuples of input tensors are supported. Nested inputs are not supported (e.g. lists of lists or dicts of dicts).
A new Functional API model can also be created by using the intermediate tensors. This enables you to quickly extract sub-components of the model.
Example:
from tensorflow import keras

inputs = keras.Input(shape=(None, None, 3))
processed = keras.layers.RandomCrop(width=32, height=32)(inputs)
conv = keras.layers.Conv2D(filters=2, kernel_size=3)(processed)
pooling = keras.layers.GlobalAveragePooling2D()(conv)
feature = keras.layers.Dense(10)(pooling)

full_model = keras.Model(inputs, feature)
backbone = keras.Model(processed, conv)
activations = keras.Model(conv, feature)
Note that the backbone and activations models are not created with keras.Input objects, but with the tensors that originate from keras.Input objects. Under the hood, the layers and weights are shared across these models, so that the user can train the full_model, and use backbone or activations for feature extraction.
The inputs and outputs of the model can be nested structures of tensors as
well, and the created models are standard Functional API models that support
all the existing APIs.
2 - By subclassing the Model
class: in that case, you should define your
layers in __init__()
and you should implement the model's forward pass
in call()
.
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)

    def call(self, inputs):
        x = self.dense1(inputs)
        return self.dense2(x)

model = MyModel()
If you subclass Model
, you can optionally have
a training
argument (boolean) in call()
, which you can use to specify
a different behavior in training and inference:
import tensorflow as tf

class MyModel(tf.keras.Model):

    def __init__(self):
        super().__init__()
        self.dense1 = tf.keras.layers.Dense(4, activation=tf.nn.relu)
        self.dense2 = tf.keras.layers.Dense(5, activation=tf.nn.softmax)
        self.dropout = tf.keras.layers.Dropout(0.5)

    def call(self, inputs, training=False):
        x = self.dense1(inputs)
        if training:
            x = self.dropout(x, training=training)
        return self.dense2(x)

model = MyModel()
Once the model is created, you can configure it with losses and metrics via model.compile(), train it with model.fit(), or use it for prediction with model.predict().
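Reading the layer graph in the summary produced below, xDeepFM combines three branches: the first-order Linear term over the raw ids, the CIN branch over the stacked field embeddings followed by a Dense(1) head, and a plain DNN over the flattened embeddings followed by another Dense(1) head; the three logits are summed with a bias and passed through a sigmoid. Below is a hedged sketch of that forward pass, reusing the hypothetical LinearSketch and CINSketch classes from above in place of the repository's Linear, CIN and DNN_v2 layers:

import tensorflow as tf
from tensorflow.keras.layers import Dense, Embedding, Flatten

class XDeepFMSketch(tf.keras.Model):
    """Sketch of the xDeepFM forward pass: sigmoid(linear + CIN + DNN)."""

    def __init__(self, feature_columns, hidden_units=(8, 4, 2), cin_size=(4, 4)):
        super().__init__()
        # One dense embedding table per sparse feature, as in the summary below.
        self.embeddings = [Embedding(fc['feat_num'], fc['embed_dim'])
                           for fc in feature_columns]
        self.linear = LinearSketch(feature_columns)  # first-order term (sketched above)
        self.cin = CINSketch(cin_size)               # explicit high-order interactions (sketched above)
        self.flatten = Flatten()
        self.dnn = tf.keras.Sequential([Dense(u, activation='relu') for u in hidden_units])
        self.cin_head = Dense(1)
        self.dnn_head = Dense(1)
        self.bias = self.add_weight(name='bias', shape=(1,), initializer='zeros')

    def call(self, inputs):  # inputs: (batch, num_fields) integer ids
        # Stack per-field embeddings into (batch, num_fields, embed_dim).
        embeds = tf.stack([emb(inputs[:, i]) for i, emb in enumerate(self.embeddings)],
                          axis=1)
        linear_term = self.linear(inputs)                          # (batch, 1)
        cin_term = self.cin_head(self.cin(embeds))                 # (batch, 1)
        dnn_term = self.dnn_head(self.dnn(self.flatten(embeds)))   # (batch, 1)
        # Sum the three branches, add the global bias, and squash to a probability.
        return tf.math.sigmoid(linear_term + cin_term + dnn_term + self.bias)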
def test_model():
    user_features = {'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8}
    seq_features = {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}
    features = [user_features, seq_features]
    model = xDeepFM(features, hidden_units=[8, 4, 2], cin_size=[4, 4])
    model.summary()

test_model()
WARNING:tensorflow:The following Variables were used in a Lambda layer's call (tf.__operators__.add_7), but are not present in its tracked objects: <tf.Variable 'bias:0' shape=(1,) dtype=float32>. This is a strong indication that the Lambda layer should be rewritten as a subclassed Layer.

Model: "model_6"
__________________________________________________________________________________________________
 Layer (type)                                   Output Shape    Param #   Connected to
==================================================================================================
 input_10 (InputLayer)                          [(None, 2)]     0         []
 tf.__operators__.getitem_6 (SlicingOpLambda)   (None,)         0         ['input_10[0][0]']
 tf.__operators__.getitem_7 (SlicingOpLambda)   (None,)         0         ['input_10[0][0]']
 embedding_19 (Embedding)                       (None, 8)       800       ['tf.__operators__.getitem_6[0][0]']
 embedding_20 (Embedding)                       (None, 8)       800       ['tf.__operators__.getitem_7[0][0]']
 tf.convert_to_tensor_2 (TFOpLambda)            (2, None, 8)    0         ['embedding_19[0][0]', 'embedding_20[0][0]']
 tf.compat.v1.transpose_2 (TFOpLambda)          (None, 2, 8)    0         ['tf.convert_to_tensor_2[0][0]']
 tf.__operators__.add_4 (TFOpLambda)            (None, 2)       0         ['input_10[0][0]']
 cin_2 (CIN)                                    (None, 8)       48        ['tf.compat.v1.transpose_2[0][0]']
 tf.reshape_1 (TFOpLambda)                      (None, 16)      0         ['tf.compat.v1.transpose_2[0][0]']
 linear_2 (Linear)                              (None, 1)       200       ['tf.__operators__.add_4[0][0]']
 dense_14 (Dense)                               (None, 1)       9         ['cin_2[0][0]']
 dnn_v2 (DNN_v2)                                (None, 2)       182       ['tf.reshape_1[0][0]']
 tf.__operators__.add_5 (TFOpLambda)            (None, 1)       0         ['linear_2[0][0]', 'dense_14[0][0]']
 dense_15 (Dense)                               (None, 1)       3         ['dnn_v2[0][0]']
 tf.__operators__.add_6 (TFOpLambda)            (None, 1)       0         ['tf.__operators__.add_5[0][0]', 'dense_15[0][0]']
 tf.__operators__.add_7 (TFOpLambda)            (None, 1)       0         ['tf.__operators__.add_6[0][0]']
 tf.math.sigmoid_5 (TFOpLambda)                 (None, 1)       0         ['tf.__operators__.add_7[0][0]']
==================================================================================================
Total params: 2,042
Trainable params: 2,042
Non-trainable params: 0
__________________________________________________________________________________________________
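Beyond model.summary(), the model trains like any other Keras model. The snippet below is only an illustrative sketch with synthetic ids and binary labels; the feature dictionaries mirror the ones in test_model() above, and the data is random, meant only to show the compile/fit plumbing:

import numpy as np
import tensorflow as tf

features = [{'feat': 'user_id', 'feat_num': 100, 'embed_dim': 8},
            {'feat': 'item_id', 'feat_num': 100, 'embed_dim': 8}]
model = xDeepFM(features, hidden_units=[8, 4, 2], cin_size=[4, 4])

# Random (user_id, item_id) pairs with binary click labels -- synthetic data only.
X = np.random.randint(0, 100, size=(1024, 2))
y = np.random.randint(0, 2, size=(1024, 1)).astype('float32')

model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss='binary_crossentropy',
              metrics=[tf.keras.metrics.AUC()])
model.fit(X, y, batch_size=256, epochs=3, validation_split=0.1)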