Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
matmul(input, other, out=NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a (j x 1 x n x m) tensor and other is a (k x m x p) tensor, out will be a (j x k x n x p) tensor.
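To make the broadcasting rule concrete, the sketch below uses hypothetical sizes (j = 2, k = 5, n = 3, m = 4, p = 6, not taken from the examples below) and checks that the batch dimensions broadcast while the matrix dimensions contract:
if (torch_is_installed()) {
# batch dims (j, 1) and (k) broadcast to (j, k);
# matrix dims (n, m) and (m, p) contract to (n, p)
input = torch_randn(c(2, 1, 3, 4))  # j = 2, n = 3, m = 4
other = torch_randn(c(5, 4, 6))     # k = 5, m = 4, p = 6
out = torch_matmul(input, other)
out$shape                           # 2 5 3 6, i.e. (j, k, n, p)
}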
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -0.2107 -0.3879 -1.3143 -0.0430 2.2345
#> -0.9136 -0.9068 1.8199 1.1698 -2.9135
#> 2.2450 2.6020 0.6387 1.1817 -2.4071
#>
#> (2,.,.) =
#> 0.2840 2.3692 -3.2117 -3.0466 6.9034
#> -1.2769 -1.0068 1.0793 0.3924 -1.1667
#> -0.8946 0.2128 0.1235 -0.5774 0.8398
#>
#> (3,.,.) =
#> 3.6449 -0.3683 -0.7667 -1.6467 -1.5919
#> -2.3211 0.3806 -0.7047 -2.5971 4.1467
#> -1.2207 -0.4390 -0.4703 -0.9046 1.9640
#>
#> (4,.,.) =
#> -3.2062 -1.1767 -0.8870 -1.5652 4.3275
#> -2.2964 -0.6939 0.7091 -0.4729 0.7282
#> -0.5145 0.3625 1.0261 -1.0104 -0.8082
#>
#> (5,.,.) =
#> 2.2827 2.1144 0.1544 -1.0026 -1.1655
#> 0.8138 2.1802 0.6393 0.8797 -1.2138
#> 2.5046 1.4670 -1.8273 -0.5193 1.5384
#>
#> (6,.,.) =
#> 0.0499 4.7422 -0.1814 -2.5135 2.8208
#> 0.9313 -1.1432 -0.8069 -0.4339 0.2122
#> -4.2823 -3.7941 2.3477 0.2831 -1.6490
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]
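Two cases from the list above are not shown in the examples: matrix x matrix, and vector x matrix (where a 1 is prepended to the vector and removed afterwards). A minimal sketch, following the same pattern as the examples:
if (torch_is_installed()) {
# matrix x matrix
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)  # shape (3, 5)
# vector x matrix: a 1 is prepended to the vector, then removed
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3, 5))
torch_matmul(tensor1, tensor2)  # shape (5)
}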