Applies a linear transformation to the incoming data: y = xA^T + b
Arguments
- in_features: size of each input sample
- out_features: size of each output sample
- bias: If set to FALSE, the layer will not learn an additive bias (see the sketch after this list). Default: TRUE
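A minimal sketch of the bias argument, using the same nn_linear() constructor as in the Examples below; the m_no_bias name is illustrative:

if (torch_is_installed()) {
  # with bias = FALSE the module should hold no bias parameter
  m_no_bias <- nn_linear(20, 30, bias = FALSE)
  print(is.null(m_no_bias$bias))  # expected: TRUE
  # the output is then purely y = xA^T, with no additive term
}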
Shape
- Input: (N, *, H_in), where * means any number of additional dimensions and H_in = in_features.
- Output: (N, *, H_out), where all but the last dimension are the same shape as the input and H_out = out_features (illustrated below).
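A short sketch of the shape rule, reusing the nn_linear() and torch_randn() calls from the Examples; the extra dimension of size 5 is arbitrary:

if (torch_is_installed()) {
  m <- nn_linear(20, 30)
  # extra leading dimensions (here 128 and 5) pass through unchanged;
  # only the last dimension is mapped from H_in = 20 to H_out = 30
  x <- torch_randn(128, 5, 20)
  y <- m(x)
  print(y$size())  # expected: 128 5 30
}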
Attributes
- weight: the learnable weights of the module, of shape (out_features, in_features). The values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\mbox{in\_features}}\).
- bias: the learnable bias of the module, of shape \((\mbox{out\_features})\). If bias is TRUE, the values are initialized from \(\mathcal{U}(-\sqrt{k}, \sqrt{k})\), where \(k = \frac{1}{\mbox{in\_features}}\). See the sketch below for a check of these bounds.
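The initialization bounds can be checked directly on the parameters; a minimal sketch, assuming the tensor methods $abs(), $max() and $item() from the same torch package:

if (torch_is_installed()) {
  m <- nn_linear(20, 30)
  # weight is stored as (out_features, in_features)
  print(m$weight$size())  # expected: 30 20
  # both parameters are drawn from U(-sqrt(k), sqrt(k)) with k = 1 / in_features
  k <- 1 / 20
  print(m$weight$abs()$max()$item() <= sqrt(k))  # expected: TRUE
  print(m$bias$abs()$max()$item() <= sqrt(k))    # expected: TRUE
}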
Examples
if (torch_is_installed()) {
m <- nn_linear(20, 30)
input <- torch_randn(128, 20)
output <- m(input)
print(output$size())
}
#> [1] 128 30
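As a sketch of the formula y = xA^T + b stated above, the module output can be compared against a manual computation; torch_matmul() and torch_allclose() are assumed to come from the same torch package:

if (torch_is_installed()) {
  m <- nn_linear(20, 30)
  x <- torch_randn(128, 20)
  # multiply by the transposed weight and add the bias, as in y = xA^T + b
  manual <- torch_matmul(x, m$weight$t()) + m$bias
  print(torch_allclose(m(x), manual))  # expected: TRUE
}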