# Matmul

## Usage

torch_matmul(self, other)

## Arguments

• `self`: (Tensor) the first tensor to be multiplied

• `other`: (Tensor) the second tensor to be multiplied

## Note

The 1-dimensional dot product version of this function does not support an out parameter.

## matmul(input, other, out=NULL) -> Tensor

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

• If both tensors are 1-dimensional, the dot product (scalar) is returned.

• If both arguments are 2-dimensional, the matrix-matrix product is returned.

• If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.

• If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.

• If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if input is a $$(j \times 1 \times n \times m)$$ tensor and other is a $$(k \times m \times p)$$ tensor, out will be a $$(j \times k \times n \times p)$$ tensor.
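The broadcasting rule in the last bullet can be checked directly. The sketch below uses small concrete sizes (j = 2, k = 5, n = 3, m = 4, p = 6) in place of the symbolic shapes from the text; the batch dimensions `(2, 1)` and `(5)` broadcast to `(2, 5)`, while the trailing matrix dimensions multiply as usual.

```r
library(torch)

# input: a (j x 1 x n x m) = (2 x 1 x 3 x 4) tensor
a <- torch_randn(c(2, 1, 3, 4))
# other: a (k x m x p) = (5 x 4 x 6) tensor
b <- torch_randn(c(5, 4, 6))

# batch dims (2, 1) and (5) broadcast to (2, 5);
# matrix dims give (n x p) = (3 x 6)
out <- torch_matmul(a, b)
dim(out)  # 2 5 3 6, i.e. (j x k x n x p)
```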

## Examples

if (torch_is_installed()) {

# vector x vector
tensor1 <- torch_randn(c(3))
tensor2 <- torch_randn(c(3))
torch_matmul(tensor1, tensor2)
# matrix x vector
tensor1 <- torch_randn(c(3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted vector
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4))
torch_matmul(tensor1, tensor2)
# batched matrix x batched matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(10, 4, 5))
torch_matmul(tensor1, tensor2)
# batched matrix x broadcasted matrix
tensor1 <- torch_randn(c(10, 3, 4))
tensor2 <- torch_randn(c(4, 5))
torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#>  -1.6344 -5.2313  0.2571  0.8802 -2.4961
#>  -3.6743 -3.0672 -0.5211  0.9903  2.1357
#>  -1.5367 -1.8580 -1.3778 -0.6833  5.5465
#>
#> (2,.,.) =
#>   1.9546  1.7773 -1.6320  5.4560 -2.5947
#>   2.7604 -1.6670 -0.8243  0.7185 -1.1604
#>  -4.8251 -1.4487  2.6527 -1.9351 -2.4288
#>
#> (3,.,.) =
#>   4.0709  1.4427 -2.2523  1.9684  1.7632
#>   1.6224 -7.0804 -2.2227  4.8965 -3.1157
#>   2.0483  2.7188 -0.6189  0.6330  0.4951
#>
#> (4,.,.) =
#>   0.2188  2.2214  0.5632 -0.2542 -0.7546
#>   0.4243 -3.6394  1.8634 -1.9575 -4.9420
#>  -0.4283 -1.9447 -1.7190  3.1359  1.0106
#>
#> (5,.,.) =
#>  -0.0445 -0.3136  0.8806 -1.1479 -1.3528
#>   1.3692 -0.6014 -1.9937  1.5184  2.9016
#>   0.7310  2.0486  0.4198 -1.7983  1.1015
#>
#> (6,.,.) =
#>  -1.3252 -2.0653  0.0881 -0.0067  0.0234
#>  -0.0771  0.1078  0.1332  0.6176 -1.0592
#>  -1.7423 -0.6105  2.1714 -2.7430 -2.0972
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]