# Matmul

## Usage

``` r
torch_matmul(self, other)
```

## Arguments

• `self`: (Tensor) the first tensor to be multiplied

• `other`: (Tensor) the second tensor to be multiplied

## Note

The 1-dimensional dot product version of this function does not support an out parameter.

## matmul(input, other, out=NULL) -> Tensor

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

• If both tensors are 1-dimensional, the dot product (scalar) is returned.

• If both arguments are 2-dimensional, the matrix-matrix product is returned.

• If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.

• If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.

• If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a $$(j \times 1 \times n \times m)$$ tensor and other is a $$(k \times m \times p)$$ tensor, out will be a $$(j \times k \times n \times p)$$ tensor.
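The batch-broadcasting rule in the last bullet can be checked numerically. As an illustration (using NumPy, whose `matmul` follows the same broadcasting and 1-dimensional prepend/append conventions; this is not part of the torch package itself):

```python
import numpy as np

# Batch dimensions broadcast: (j, 1, n, m) @ (k, m, p) -> (j, k, n, p)
j, k, n, m, p = 2, 3, 4, 5, 6
a = np.random.randn(j, 1, n, m)
b = np.random.randn(k, m, p)
print(np.matmul(a, b).shape)  # (2, 3, 4, 6)

# 1-dimensional second argument: a 1 is appended for the multiply,
# then the appended dimension is removed from the result
v = np.random.randn(m)
print(np.matmul(a, v).shape)  # (2, 1, 4)
```

The batch dimensions `(j, 1)` and `(k,)` broadcast to `(j, k)`, while the trailing two dimensions behave as an ordinary matrix product.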

## Examples

``` r
if (torch_is_installed()) {
  # vector x vector
  tensor1 <- torch_randn(c(3))
  tensor2 <- torch_randn(c(3))
  torch_matmul(tensor1, tensor2)

  # matrix x vector
  tensor1 <- torch_randn(c(3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted vector
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x batched matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(10, 4, 5))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
}
```
```
#> torch_tensor
#> (1,.,.) =
#>  -0.6703  0.1748  0.1019  2.1782 -1.9164
#>   3.4863  0.7423 -0.8082  1.5387 -0.2886
#>   2.2476  0.9486  1.8962 -1.2999  1.4790
#>
#> (2,.,.) =
#>   0.9446  1.0571  1.4544 -0.2813  0.9670
#>   1.4007 -0.4533  0.5070 -0.2281 -0.8250
#>  -0.2017  0.3509  0.6494  0.0332  0.1461
#>
#> (3,.,.) =
#>  -3.4653 -1.6867 -0.8829 -0.4846 -1.2331
#>  -0.6890  0.2529  1.7228 -0.7841  0.2751
#>  -4.6621  0.6449  0.9571  2.8237 -2.0121
#>
#> (4,.,.) =
#>  -0.3645 -0.8249  0.9828 -2.5264  0.8061
#>  -5.7000 -0.6100 -0.2068  0.9414 -1.2617
#>   1.9504  1.7264  2.3723 -1.4703  2.5558
#>
#> (5,.,.) =
#>  -0.3427  0.8398  1.4759 -0.6493  1.1072
#>   2.5539 -1.2103 -1.4570 -0.5723 -0.6212
#>   2.2649 -0.9920 -2.4079  0.2779 -0.6104
#>
#> (6,.,.) =
#>  -1.8022  0.1544 -0.5616  0.9548 -0.2760
#>  -0.6957 -2.7108 -2.1812 -1.5212 -1.2216
#>   2.8840 -0.3510 -1.7105 -1.5380  1.7126
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]
```