Matmul

Usage

torch_matmul(self, other)

Arguments

self

(Tensor) the first tensor to be multiplied

other

(Tensor) the second tensor to be multiplied

Note

The 1-dimensional dot product version of this function does not support an `out` parameter.

matmul(input, other, out=NULL) -> Tensor

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

  • If both tensors are 1-dimensional, the dot product (scalar) is returned.

  • If both arguments are 2-dimensional, the matrix-matrix product is returned.

  • If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.

  • If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.

  • If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a \((j \times 1 \times n \times m)\) tensor and other is a \((k \times m \times p)\) tensor, out will be a \((j \times k \times n \times p)\) tensor (see the sketch after this list).
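
For the batch-broadcasting case in the last bullet, here is a minimal sketch (the sizes j = 2, k = 3, n = 4, m = 5, p = 6 are arbitrary, chosen only for illustration) confirming that a \((j \times 1 \times n \times m)\) tensor multiplied by a \((k \times m \times p)\) tensor yields a \((j \times k \times n \times p)\) result:

if (torch_is_installed()) {
  a <- torch_randn(c(2, 1, 4, 5))  # (j x 1 x n x m)
  b <- torch_randn(c(3, 5, 6))     # (k x m x p)
  out <- torch_matmul(a, b)
  out$shape                        # 2 3 4 6, i.e. (j x k x n x p)
}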

Examples

if (torch_is_installed()) {

# vector x vector
tensor1 <- torch_randn(c(3))
tensor2 <- torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 <- torch_randn(c(3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) = 
#>  -1.8151  0.5780 -0.1290 -1.8930  0.5097
#>  -0.8777 -0.6976 -0.2187  1.4253  0.2219
#>  -0.6841  1.3040 -1.1039 -0.4555  0.3386
#> 
#> (2,.,.) = 
#>   1.7396 -0.4245  0.8057  0.1974 -0.3215
#>   2.0949  0.4632  2.0046 -4.0647 -0.3736
#>  -0.1693  0.1147  0.6495 -0.2027  1.5907
#> 
#> (3,.,.) = 
#>   2.1441  0.4117  0.2571  0.3958  0.0381
#>   0.5040  0.8120  0.5435 -1.2966  1.1500
#>  -1.9426 -0.2980 -0.7680 -0.6991 -1.3861
#> 
#> (4,.,.) = 
#>   4.5067 -0.2358  3.4780 -2.1069  1.3877
#>  -0.2883  0.2882 -0.8061  2.6827  1.6880
#>   1.0297  0.5074  0.9047 -2.5976 -0.3816
#> 
#> (5,.,.) = 
#>  -0.0459  1.9817 -0.8511 -2.3165 -0.0819
#>   0.3907  0.7466  0.2695 -1.2724  0.5345
#>  -4.6231 -0.0114 -3.3173  2.4813 -1.0749
#> 
#> (6,.,.) = 
#>   2.9665  0.7322  1.6268 -1.3085  1.3395
#>   0.6895 -2.1424  1.1066  1.8214 -0.6949
#>   2.0319 -0.4956  1.9697 -0.9556  0.7094
#> 
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]
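
The examples above do not cover the plain matrix x matrix case or the vector x matrix case; a minimal sketch of those two, in the same style (shapes chosen only for illustration):

if (torch_is_installed()) {
  # matrix x matrix: (3 x 4) times (4 x 5) -> (3 x 5)
  tensor1 <- torch_randn(c(3, 4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
  # vector x matrix: a 1 is prepended to the vector, giving (1 x 4) times (4 x 5);
  # the prepended dimension is then removed, so the result has shape (5)
  tensor1 <- torch_randn(c(4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
}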