Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_matmul(self, other) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:

- If both tensors are 1-dimensional, the dot product (scalar) is returned.
- If both arguments are 2-dimensional, the matrix-matrix product is returned.
- If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
- If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
- If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if `input` is a \((j \times 1 \times n \times m)\) tensor and `other` is a \((k \times m \times p)\) tensor, the result will be a \((j \times k \times n \times p)\) tensor.
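The broadcasting rule above can be checked directly. This sketch picks arbitrary concrete sizes (j = 2, k = 5, n = 3, m = 4, p = 6) for the symbolic shapes in the example; the batch dimensions \((2, 1)\) and \((5)\) broadcast to \((2, 5)\), while the trailing matrix dimensions multiply as \(3 \times 4\) by \(4 \times 6\):

```r
library(torch)

if (torch_is_installed()) {
  input <- torch_randn(c(2, 1, 3, 4))  # (j = 2, 1, n = 3, m = 4)
  other <- torch_randn(c(5, 4, 6))     # (k = 5, m = 4, p = 6)
  out <- torch_matmul(input, other)
  out$shape                            # 2 5 3 6, i.e. (j, k, n, p)
}
```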
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> 0.7460 0.6429 -2.2981 1.1007 -0.8882
#> 0.4028 0.2293 -2.2601 -1.0417 0.2696
#> 0.4428 1.0245 1.0811 0.6305 -2.3190
#>
#> (2,.,.) =
#> 0.4344 0.4525 0.0217 1.2605 -1.5103
#> 0.3009 0.1593 0.0042 -0.3794 -1.0311
#> 0.2879 0.5997 0.9439 1.2910 -1.6845
#>
#> (3,.,.) =
#> -0.1025 0.0202 -1.0318 -1.0637 1.1033
#> 0.8092 0.4052 -0.1798 -0.2162 -2.6361
#> -0.8710 0.5978 3.6105 2.5247 0.3231
#>
#> (4,.,.) =
#> -0.1914 0.6312 -2.2367 -1.8273 2.2736
#> -0.7006 0.9832 -0.5136 1.3260 2.7392
#> -0.3681 0.0248 -0.1657 -1.3316 1.3797
#>
#> (5,.,.) =
#> -0.0805 -0.9444 0.5724 -2.5830 -0.1149
#> 0.5353 -1.5956 -2.3768 -2.0726 -0.0577
#> -0.2912 0.7820 5.8870 2.9461 -3.3156
#>
#> (6,.,.) =
#> -0.0365 0.1440 1.9602 1.0294 -1.3080
#> 0.5767 0.3401 1.7709 2.8735 -3.2712
#> 0.2113 0.6800 -1.3332 1.2957 0.2318
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]