Source: `R/gen-namespace-docs.R`, `R/gen-namespace-examples.R`, `R/gen-namespace.R`

`torch_matmul.Rd`

Matmul

```r
torch_matmul(self, other)
```

| Argument | Description |
|---|---|
| self | (Tensor) the first tensor to be multiplied |
| other | (Tensor) the second tensor to be multiplied |

The 1-dimensional dot product version of this function does not support an `out` parameter.

Matrix product of two tensors.

The behavior depends on the dimensionality of the tensors as follows:

If both tensors are 1-dimensional, the dot product (scalar) is returned.

If both arguments are 2-dimensional, the matrix-matrix product is returned.

If the first argument is 1-dimensional and the second argument is 2-dimensional, a 1 is prepended to its dimension for the purpose of the matrix multiply. After the matrix multiply, the prepended dimension is removed.
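To illustrate this case (the shape choices below are arbitrary, not from the original examples): multiplying a length-4 vector by a \(4 \times 5\) matrix treats the vector as a \(1 \times 4\) row matrix and then drops the prepended dimension, leaving a length-5 vector.

```r
library(torch)

if (torch_is_installed()) {
  vec <- torch_randn(c(4))     # 1-dimensional first argument
  mat <- torch_randn(c(4, 5))  # 2-dimensional second argument
  out <- torch_matmul(vec, mat)
  out$shape  # the prepended dimension is removed: shape is 5
}
```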

If the first argument is 2-dimensional and the second argument is 1-dimensional, the matrix-vector product is returned.

If both arguments are at least 1-dimensional and at least one argument is N-dimensional (where N > 2), then a batched matrix multiply is returned. If the first argument is 1-dimensional, a 1 is prepended to its dimension for the purpose of the batched matrix multiply and removed after. If the second argument is 1-dimensional, a 1 is appended to its dimension for the purpose of the batched matrix multiply and removed after. The non-matrix (i.e. batch) dimensions are broadcasted (and thus must be broadcastable). For example, if `input` is a \((j \times 1 \times n \times m)\) tensor and `other` is a \((k \times m \times p)\) tensor, `out` will be a \((j \times k \times n \times p)\) tensor.
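The broadcasting rule above can be checked directly on tensor shapes; the concrete sizes below (\(j = 2\), \(n = 4\), \(m = 3\), \(k = 5\), \(p = 6\)) are arbitrary choices for illustration:

```r
library(torch)

if (torch_is_installed()) {
  input <- torch_randn(c(2, 1, 4, 3))  # (j x 1 x n x m)
  other <- torch_randn(c(5, 3, 6))     # (k x m x p)
  out <- torch_matmul(input, other)
  out$shape  # batch dimensions broadcast: (j x k x n x p) = 2 5 4 6
}
```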

```r
if (torch_is_installed()) {
  # vector x vector
  tensor1 <- torch_randn(c(3))
  tensor2 <- torch_randn(c(3))
  torch_matmul(tensor1, tensor2)

  # matrix x vector
  tensor1 <- torch_randn(c(3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted vector
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4))
  torch_matmul(tensor1, tensor2)

  # batched matrix x batched matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(10, 4, 5))
  torch_matmul(tensor1, tensor2)

  # batched matrix x broadcasted matrix
  tensor1 <- torch_randn(c(10, 3, 4))
  tensor2 <- torch_randn(c(4, 5))
  torch_matmul(tensor1, tensor2)
}
#> torch_tensor
#> (1,.,.) =
#>  -3.7098 -1.1172 -1.6686 -0.4494  2.9302
#>  -0.8941 -0.1356 -1.1760  0.6844 -1.3888
#>  -0.5082  0.2904 -0.1575  1.3850 -1.1415
#>
#> (2,.,.) =
#>  -6.8313 -1.5690 -3.7260  1.4323  1.6079
#>  -3.5725 -1.0807 -1.9969  1.0123 -0.1947
#>  -0.9381 -1.1668  0.4067 -0.1116  0.0333
#>
#> (3,.,.) =
#>  -2.6460 -1.4916  0.1229  1.4380 -0.6787
#>  -1.9387 -0.1548 -1.3375 -0.4379  2.1450
#>  -0.9384  0.0436 -0.3290  0.3641  0.6767
#>
#> (4,.,.) =
#>   2.5517  0.0664  2.7634 -0.4600 -0.0212
#>  -0.5083 -0.1571  0.1162  0.1263  0.5050
#>  -1.5062 -0.1190 -1.2352  1.0990 -1.0813
#>
#> (5,.,.) =
#>   5.8560  0.3155  4.2695 -2.4947  0.1443
#>   2.2106  0.3440  0.0966 -2.1345  0.6504
#>  -1.4630  0.3457 -1.3036  1.4699 -0.9652
#>
#> (6,.,.) =
#>   0.4144  0.0035  2.0123  0.6590  0.6923
#>   0.0580 -0.9808  0.6128 -0.5030 -0.3969
#>   0.3492 -0.2287  0.4878 -1.1594  1.5697
#>
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{10,3,5} ]
```