Conv_transpose1d
Source: R/gen-namespace-docs.R, R/gen-namespace-examples.R, R/gen-namespace.R
torch_conv_transpose1d.Rd
Usage
torch_conv_transpose1d(
input,
weight,
bias = list(),
stride = 1L,
padding = 0L,
output_padding = 0L,
groups = 1L,
dilation = 1L
)
Arguments
- input
input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)
- weight
filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)
- bias
optional bias of shape \((\mbox{out\_channels})\). Default: NULL
- stride
the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1
- padding
dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0
- output_padding
additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0
- groups
split input into groups, \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1
- dilation
the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor
Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".
See nn_conv_transpose1d() for details and output shape.
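For reference, the output length follows the standard transposed-convolution relation (the shape formula documented for nn_conv_transpose1d()), where \(L_{in} = iW\) is the input length and \(kW\) is the kernel width:

\[
L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (kW - 1) + \mbox{output\_padding} + 1
\]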
Examples
if (torch_is_installed()) {
  inputs <- torch_randn(c(20, 16, 50))
  weights <- torch_randn(c(16, 33, 5))
  nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
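The trailing dimension of the printed shape above (54) can be checked against the standard transposed-convolution length formula. A minimal sketch in Python (the helper name `conv_transpose1d_out_len` is hypothetical, introduced only for illustration):

```python
def conv_transpose1d_out_len(l_in, kernel_size, stride=1, padding=0,
                             output_padding=0, dilation=1):
    """Output length of a 1D transposed convolution (standard formula)."""
    return ((l_in - 1) * stride - 2 * padding
            + dilation * (kernel_size - 1) + output_padding + 1)

# The example above: input length 50, kernel width 5, all other
# parameters at their defaults.
print(conv_transpose1d_out_len(50, 5))  # 54, matching CPUFloatType{20,33,54}
```

The batch and channel dimensions follow directly from the inputs: 20 is the minibatch size and 33 is out_channels, taken from the second dimension of the weight tensor.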