Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_matmul.Rd

matmul(input, other, out = NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
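A minimal sketch of the prepend rule (the dimension sizes here are arbitrary choices for illustration): a length-3 vector times a 3 x 4 matrix is treated as a 1 x 3 matrix for the multiply, and the leading 1 is then dropped, leaving a length-4 vector.

```r
library(torch)

# vector x matrix: vec is treated as (1 x 3) for the multiply,
# giving (1 x 3) %*% (3 x 4) = (1 x 4), then the leading 1 is dropped.
vec <- torch_randn(c(3))
mat <- torch_randn(c(3, 4))
out <- torch_matmul(vec, mat)
out$shape  # length-4 vector
```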
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcast (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, out will be a \((j \times k \times n \times p)\) tensor.
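The shape rule above can be checked directly; the concrete sizes below (j = 2, k = 3, n = 4, m = 5, p = 6) are arbitrary choices for illustration:

```r
library(torch)

# Batch dimensions broadcast: a (j x 1 x n x m) tensor and a (k x m x p)
# tensor produce a (j x k x n x p) result. Here j = 2, k = 3, n = 4,
# m = 5, p = 6, so the output shape is (2, 3, 4, 6).
input <- torch_randn(c(2, 1, 4, 5))
other <- torch_randn(c(3, 5, 6))
out <- torch_matmul(input, other)
out$shape  # 2 3 4 6
```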
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> 3.0716 0.7468 -0.8310 1.0082 0.2713
#> 0.9538 -2.6430 1.0720 1.4433 -1.5622
#> -2.3489 4.1144 3.9783 0.0552 -0.9403
#>
#> (2,.,.) =
#> 1.8761 -0.5561 -3.6179 -1.2761 0.2098
#> 8.8357 1.3633 -3.4486 2.8546 2.1681
#> 3.3908 1.7386 -1.9735 0.4729 1.2895
#>
#> (3,.,.) =
#> -1.2626 2.4527 -1.9928 -2.6927 0.1746
#> -2.7742 -5.0983 -4.1720 -3.1939 -0.3129
#> -0.5180 -1.1345 1.4375 1.0112 -0.0437
#>
#> (4,.,.) =
#> 2.2854 -5.7677 -4.3598 -0.7493 -1.4541
#> 2.2297 -0.1045 0.3557 1.1545 -1.0148
#> 2.8247 -3.7004 -1.2771 1.4504 -0.6019
#>
#> (5,.,.) =
#> -8.8498 2.2194 5.3933 -2.4148 -2.0289
#> -4.6353 2.2389 5.8302 0.8318 -0.2607
#> -2.2359 1.3161 3.3844 0.3085 -1.3568
#>
#> (6,.,.) =
#> -3.7949 -3.7769 -4.6787 -4.3660 -0.0204
#> -0.1510 -0.1350 0.9603 0.1589 -1.3366
#> -0.5151 -4.7478 -4.3111 -1.6742 1.0139
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]