A simple lookup table that looks up embeddings in a fixed dictionary of a fixed size.
Usage
nnf_embedding(
input,
weight,
padding_idx = NULL,
max_norm = NULL,
norm_type = 2,
scale_grad_by_freq = FALSE,
sparse = FALSE
)
Arguments
- input
(LongTensor) Tensor containing indices into the embedding matrix
- weight
(Tensor) The embedding matrix with number of rows equal to the maximum possible index + 1, and number of columns equal to the embedding size
- padding_idx
(int, optional) If given, pads the output with the embedding vector at padding_idx (initialized to zeros) whenever it encounters the index.
- max_norm
(float, optional) If given, each embedding vector with norm larger than max_norm is renormalized to have norm max_norm. Note: this will modify weight in-place.
- norm_type
(float, optional) The p of the p-norm to compute for the max_norm option. Default: 2.
- scale_grad_by_freq
(boolean, optional) If given, this will scale gradients by the inverse of frequency of the words in the mini-batch. Default: FALSE.
- sparse
(bool, optional) If TRUE, the gradient w.r.t. weight will be a sparse tensor. See Notes under nn_embedding for more details regarding sparse gradients.
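Examples
A minimal usage sketch, assuming the torch package is attached; the weight values, index values, and shapes below are illustrative, and the indices follow the package's usual 1-based convention.
library(torch)

# embedding matrix: 10 rows (dictionary size), 3 columns (embedding dimension)
weight <- torch_randn(10, 3)

# a batch of 2 samples of 4 indices each, stored as a LongTensor
input <- torch_tensor(rbind(c(1, 2, 4, 5), c(4, 3, 2, 9)), dtype = torch_long())

# each index selects the corresponding row of weight;
# the result has shape (2, 4, 3)
nnf_embedding(input, weight)

# with max_norm, looked-up rows of weight whose 2-norm exceeds 1 are
# renormalized in-place to have norm 1
nnf_embedding(input, weight, max_norm = 1, norm_type = 2)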