Matmul

Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R

torch_matmul.Rd

matmul(input, other, out = NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, out will be a \((j \times k \times n \times p)\) tensor.
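The broadcasting rule above can be checked directly by inspecting the result's shape. A minimal sketch (the concrete sizes j = 10, k = 2, n = 3, m = 4, p = 5 are chosen here purely for illustration):

```r
library(torch)

# batch dims (10, 1) and (2) broadcast to (10, 2);
# the matrix dims contract 3x4 times 4x5 into 3x5
input <- torch_randn(c(10, 1, 3, 4))
other <- torch_randn(c(2, 4, 5))
out <- torch_matmul(input, other)
out$shape  # 10 2 3 5
```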
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 <- torch_randn(c(3))
tensor2 <- torch_randn(c(3))
torch_matmul(tensor1, tensor2)
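# matrix x matrix (an additional illustration of the 2-d x 2-d case)
tensor1 <- torch_randn(c(3, 4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)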
# matrix x vector
tensor1 <- torch_randn(c(3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
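# vector x matrix (additional illustration: the 1-d vector is treated
# as a 1 x 4 row vector for the multiply, then the prepended 1 is dropped)
tensor1 <- torch_randn(c(4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)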
# batched matrix x broadcasted vector
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -1.2249 -0.1648 -2.3938 1.7560 0.9053
#> -1.0419 2.0118 -4.2912 7.2212 -0.7407
#> 0.1272 -1.2478 2.6422 -4.4955 -0.0153
#>
#> (2,.,.) =
#> 0.9233 0.7070 0.9890 -0.2460 -1.9709
#> 0.6177 -0.4424 -0.4492 -1.5433 -0.3341
#> 1.3711 0.5767 1.3911 -1.2805 -2.4189
#>
#> (3,.,.) =
#> 1.7171 0.4976 -0.0939 -0.3333 -1.2506
#> -0.7838 -0.4589 0.4021 0.2197 1.8895
#> 0.3367 -0.5325 0.8675 -1.8757 0.1315
#>
#> (4,.,.) =
#> 1.6675 0.2451 0.0737 -1.4478 -1.8759
#> 1.4068 0.0628 0.3464 -1.7983 -1.7018
#> -1.0183 -1.2610 1.5015 -2.8130 1.0033
#>
#> (5,.,.) =
#> 2.3063 0.3986 -1.8893 -0.5622 -1.8505
#> -0.2171 0.5405 -2.6125 2.8565 0.3645
#> 0.5116 0.4476 1.1008 -0.4422 -1.4571
#>
#> (6,.,.) =
#> 1.2088 1.6089 -1.1047 2.7038 -2.2338
#> 0.3484 0.5903 0.7115 0.2711 -1.3215
#> 1.8408 1.9322 -3.3484 3.5272 -3.2111
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]