Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_matmul(input, other, out = NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
- If both tensors are 1-dimensional, the dot product (a scalar) is returned.
- If both arguments are 2-dimensional, the matrix-matrix product is returned.
- If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
- If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
- If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcast (and thus must be broadcastable). For example, if `input` is a \((j \times 1 \times n \times m)\) tensor and `other` is a \((k \times m \times p)\) tensor, `out` will be a \((j \times k \times n \times p)\) tensor.
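The batch-broadcasting rule above can be sketched in base R without torch: the batch dimensions are left-padded with 1s, checked for compatibility, and combined element-wise with the maximum. `matmul_shape` is a hypothetical illustration helper, not part of the torch API:

```r
# Sketch (base R, no torch needed): compute the output shape torch_matmul()
# would produce for two batched operands (both at least 2-dimensional).
matmul_shape <- function(a, b) {
  stopifnot(length(a) >= 2, length(b) >= 2)
  stopifnot(a[length(a)] == b[length(b) - 1])    # inner dimensions must agree
  batch_a <- a[seq_len(length(a) - 2)]           # batch dims of `a`
  batch_b <- b[seq_len(length(b) - 2)]           # batch dims of `b`
  n <- max(length(batch_a), length(batch_b))
  pad <- function(x) c(rep(1, n - length(x)), x) # left-pad with 1s
  ba <- pad(batch_a)
  bb <- pad(batch_b)
  stopifnot(all(ba == bb | ba == 1 | bb == 1))   # must be broadcastable
  c(pmax(ba, bb), a[length(a) - 1], b[length(b)])
}

matmul_shape(c(10, 3, 4), c(10, 4, 5))  # 10 3 5
matmul_shape(c(2, 1, 3, 4), c(5, 4, 6)) # 2 5 3 6  -- the (j x 1 x n x m) case
```

The second call mirrors the worked example in the text: a \((j \times 1 \times n \times m)\) tensor times a \((k \times m \times p)\) tensor yields \((j \times k \times n \times p)\).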
Examples
if (torch_is_installed()) {
  # vector x vector
  tensor1 <- torch_randn(c(3))
  tensor2 <- torch_randn(c(3))
  torch_matmul(tensor1, tensor2)

  # matrix x vector
  tensor1 <- torch_randn(c(3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted vector
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x batched matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(10, 4, 5))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -0.4077 0.2451 -0.1143 -0.7054 -0.5042
#> -1.2226 0.6981 -0.6811 -1.8699 -0.6471
#> 1.0530 1.0960 0.5336 -0.1148 1.0540
#>
#> (2,.,.) =
#> 5.2976 1.9356 1.0881 1.4976 5.1379
#> 2.4158 -3.4833 0.9209 6.8367 2.8771
#> 0.0648 -0.7560 1.6180 3.2594 0.3732
#>
#> (3,.,.) =
#> 1.8713 -0.6972 0.3407 2.1596 1.8692
#> -1.7630 -1.4168 -0.3854 0.0093 -2.2281
#> 3.5056 -3.5043 1.2135 7.1145 3.0974
#>
#> (4,.,.) =
#> -0.6596 -2.5338 -1.4705 0.9647 -0.4396
#> 0.2672 -1.0593 2.3533 4.2034 -0.0557
#> -0.8705 -0.2677 -0.1221 -0.5649 -1.2931
#>
#> (5,.,.) =
#> -1.7278 -1.0653 -0.3426 0.9913 -0.3978
#> 0.5915 -1.2822 -1.5748 0.8444 2.0462
#> -0.6534 2.0247 0.2464 -2.7995 -1.1858
#>
#> (6,.,.) =
#> -2.6976 -0.1377 -3.0839 -4.1184 -1.3508
#> -0.9115 1.5129 -2.3088 -5.0526 -0.6699
#> 0.8071 1.2670 -1.6677 -3.7896 0.1341
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]