Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = NULL,
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero padding of dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

torch_conv_transpose1d(input, weight, bias = NULL, stride = 1, padding = 0, output_padding = 0, groups = 1, dilation = 1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

See nn_conv_transpose1d() for details and output shape.
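
The output length follows the transposed-convolution shape formula documented for nn_conv_transpose1d(): \(L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (kW - 1) + \mbox{output\_padding} + 1\). A minimal base-R helper to evaluate it (illustrative only, not part of the package):

```r
# Output length of a 1D transposed convolution, per the shape formula
# in the nn_conv_transpose1d() docs. The function name is illustrative.
conv_transpose1d_out_len <- function(l_in, kernel_size, stride = 1,
                                     padding = 0, output_padding = 0,
                                     dilation = 1) {
  (l_in - 1) * stride - 2 * padding +
    dilation * (kernel_size - 1) + output_padding + 1
}

# Input length 50, kernel size 5, all other arguments at their defaults:
conv_transpose1d_out_len(50, 5)  # 54
```

This matches the trailing dimension of the result in the example below, `CPUFloatType{20,33,54}`.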

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8  -4.1502   5.2662   1.2746  -3.9347   2.2134   7.8490  -1.4314   5.7363
#>   -5.8787  -3.6590  -5.1717  -6.2001  12.6397  13.7569   6.9527   0.8596
#>   -1.2104   0.7274   0.4443  -5.9108   7.5095  -6.3658  -6.4068  13.9326
#>   -5.9152  -6.9704   7.5327   3.7642  -1.7677  -5.2399  15.3008  -8.7244
#>    0.4521  -6.3098  -9.1746   6.6661  14.1740  15.3086  -8.1402   0.9600
#>    2.2716   0.8837  -0.1830  -4.7564  14.5371  14.8455  -4.3501  -9.0601
#>   -1.5410  11.1980   9.0446  -3.0310  -1.8131  15.7361 -12.3093  16.3265
#>   -3.2634   8.4169 -10.9854  -3.6416   5.5400   3.7850   5.8189  -1.2514
#>    1.1178  11.4405  -1.1431  -8.9672  -6.1387  14.0108  -1.0153   5.5063
#>    2.8625   4.9656   6.2678  -7.2279  -0.8833   5.9117   9.8635   0.9836
#>    2.9008   9.2096  -5.7410   4.0162   1.7477   0.0787  -0.9875  14.4767
#>    7.4777  -2.6968   7.8789  10.3786  -0.9959  15.2169  12.7871   1.1561
#>   -6.8383   8.0303   3.1514  11.9488  -1.3620  -9.9862   0.0746  -4.0376
#>   -2.2166   1.9659  -5.3781 -14.7103  -3.3162   4.7723   2.4020  -8.3331
#>    7.2071   7.2636  12.6062  -0.7760   9.4554  -7.9184 -11.9570   5.5830
#>    2.1695   6.4469   7.0870  -0.3204   2.3094  -6.7242  -1.6000   3.1596
#>    3.3520   4.1517  -2.1789   4.5343   2.5825   2.1492   1.1830  -4.4321
#>    0.0922  -1.3383  -4.0789   7.5349   0.3219   5.2433  -2.6360   3.7869
#>    4.7933   3.3512  -4.5051  -2.1901  22.3008  18.8490   7.5580  -5.6750
#>    7.3938 -11.1678  -2.0972  -5.7141 -11.4733  -7.5001   3.0933  -9.5548
#>   -9.2641 -12.2349   1.5176  -5.4936   4.9133  -1.5024 -10.7774 -11.2684
#>    3.2107   1.9521  11.6881  10.0457  12.0771   5.5904   2.8542 -15.3766
#>    3.3805  -0.6111  -3.0381   4.6090  -6.2199   7.1722  -3.4843  -3.2336
#>   -3.0108   6.0233   4.3482 -10.2952  -0.5250   1.0059   6.7320  12.2936
#>    6.2232   0.8173   2.2507   4.4085   6.0984 -12.9619  -7.8572   1.1984
#>    6.3426 -11.2770   4.4595  -5.8399  -1.1147  -5.2786 -11.4394  -7.1858
#>    4.6621  -2.3163   4.1256  -9.2621   0.6303   0.6374  11.0537   8.0226
#>   -4.4188   2.5024   7.9690 -10.1976  -2.7284  -1.7297 -14.5485  -1.5143
#>   -3.2679  -4.1840   6.4311   6.7153  11.8356   8.8752   0.9989   1.3909
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
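
When stride is greater than 1, several output lengths are consistent with the same input length under the corresponding forward convolution; output_padding selects among them. A small sketch of this (tensor values are random; only the shapes are the point, and they follow from the documented shape formula):

```r
if (torch_is_installed()) {
  x <- torch_randn(c(1, 4, 10))
  w <- torch_randn(c(4, 8, 3))  # (in_channels, out_channels/groups, kW)

  # With stride = 2, more than one output length maps back to an input
  # of length 10; output_padding picks which one is produced.
  y0 <- nnf_conv_transpose1d(x, w, stride = 2)                      # shape (1, 8, 21)
  y1 <- nnf_conv_transpose1d(x, w, stride = 2, output_padding = 1)  # shape (1, 8, 22)
}
```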