Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R (torch_matmul.Rd)
matmul(input, other, out=NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a (j × 1 × n × m) tensor and other is a (k × m × p) tensor, out will be a (j × k × n × p) tensor.
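The following sketch illustrates the batched broadcasting rule; the shapes (2 × 1 × 3 × 4) and (5 × 4 × 6) are chosen only for illustration and are not part of the documented interface.
if (torch_is_installed()) {
  # batch dimensions (2, 1) and (5) broadcast to (2, 5);
  # the matrix product of 3x4 and 4x6 matrices is 3x6,
  # so the result should have shape (2, 5, 3, 6)
  a = torch_randn(c(2, 1, 3, 4))
  b = torch_randn(c(5, 4, 6))
  torch_matmul(a, b)$shape
}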
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> -1.1164 0.3940 0.3484 2.0695 -0.5228
#> -0.9559 -0.0983 -2.1750 -2.1280 0.8184
#> 1.3234 -0.3771 2.3310 1.7787 -1.4039
#>
#> (2,.,.) =
#> -0.8618 -1.0597 -0.5731 0.1435 -1.0502
#> 0.0843 0.9535 -0.4668 -0.6751 1.5336
#> -0.8721 -0.1149 -2.3503 -2.6045 0.5173
#>
#> (3,.,.) =
#> 0.2155 2.2246 1.5020 2.4600 1.9676
#> -0.9840 -0.7834 -1.9952 -1.9891 -0.4647
#> 1.6203 -0.7079 1.9915 0.8991 -0.9044
#>
#> (4,.,.) =
#> -0.1419 -4.2189 -0.8079 -1.6711 -4.0958
#> 0.4448 -0.1515 -0.5459 -1.5106 0.2232
#> 2.0520 3.8454 1.1733 -0.3855 4.3862
#>
#> (5,.,.) =
#> -1.8747 -0.4473 -0.7882 1.1635 -1.2258
#> 0.6930 4.2601 0.1318 -0.1396 4.7588
#> 0.7601 0.1396 -0.2970 -1.5069 0.5214
#>
#> (6,.,.) =
#> 2.3254 5.4225 3.7450 3.5538 4.8803
#> 2.0565 0.0569 0.6607 -1.7196 0.6934
#> 0.7582 -0.2150 1.2865 1.0094 -0.4370
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]
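The printed result above is truncated. Assuming the n argument referenced in the truncation message is accepted by the tensor print method, the full output can be requested as follows:
if (torch_is_installed()) {
  tensor1 = torch_randn(c(10, 3, 4))
  tensor2 = torch_randn(c(4, 5))
  # print all batch entries instead of the truncated preview
  print(torch_matmul(tensor1, tensor2), n = -1)
}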