Matmul
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_matmul.Rd
matmul(input, other, out=NULL) -> Tensor
Matrix product of two tensors.
The behavior depends on the dimensionality of the tensors as follows:
If both tensors are 1-dimensional, the dot product (scalar) is returned.
If both arguments are 2-dimensional, the matrix-matrix product is returned.
If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.
If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, out will be a \((j \times k \times n \times p)\) tensor.
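The shape rule above can be checked directly. A minimal sketch, assuming the torch package is installed; the concrete sizes (j = 2, k = 5, n = 3, m = 4, p = 6) are chosen arbitrarily for illustration:

```r
library(torch)

# (j x 1 x n x m) matmul (k x m x p) -> (j x k x n x p):
# the size-1 batch dimension of `input` broadcasts against k.
input = torch_randn(c(2, 1, 3, 4))  # j = 2, n = 3, m = 4
other = torch_randn(c(5, 4, 6))     # k = 5, m = 4, p = 6
out = torch_matmul(input, other)
out$shape                           # 2 5 3 6
```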
Examples
if (torch_is_installed()) {
# vector x vector
tensor1 = torch_randn(c(3))
tensor2 = torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 = torch_randn(c(3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 = torch_randn(c(10, 3, 4))
tensor2 = torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#> 0.4953 -0.0931 -0.6012 0.3149 0.4956
#> -0.8852 -0.8143 2.5032 1.5431 3.9407
#> 0.2427 0.4619 -1.6010 -0.1447 1.1771
#>
#> (2,.,.) =
#> 2.1702 2.7736 -8.8175 -3.2974 -4.1093
#> -0.1470 1.8487 -4.4137 -1.2524 2.5574
#> 0.3700 -0.5519 0.7760 0.8085 -0.8392
#>
#> (3,.,.) =
#> 1.0952 0.8036 -3.0625 -0.9351 -2.5530
#> -1.9992 -2.0946 7.0350 2.6163 4.0652
#> -0.7806 1.0579 -1.0802 -1.3288 -1.2548
#>
#> (4,.,.) =
#> -1.0307 0.4309 0.1326 -0.0340 2.4011
#> 1.5600 0.3610 -2.4803 -0.5294 -4.5023
#> 0.5848 1.4312 -4.2854 -1.3027 1.4004
#>
#> (5,.,.) =
#> 0.0092 0.4273 -0.4270 -1.1245 -3.4985
#> -0.2728 0.1605 0.5047 -0.8748 -2.8190
#> -1.0135 0.3970 0.5684 -0.5930 0.1720
#>
#> (6,.,.) =
#> 1.1472 -0.4599 -0.6486 0.4600 0.3944
#> -2.1068 -0.8513 4.0317 1.7759 6.1749
#> -0.7362 -1.1713 2.6907 2.3224 6.6571
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]