Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero-padding of dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups (a grouped call is sketched after this argument list). Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
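
As a hedged sketch of the groups argument (the shapes and values below are illustrative assumptions, not taken from this page), each group convolves its own slice of the input channels and contributes \(\frac{\mbox{out\_channels}}{\mbox{groups}}\) output channels:

if (torch_is_installed()) {

# assumed: 16 input channels split into 2 groups of 8
x <- torch_randn(c(4, 16, 50))
# weight shape is (in_channels, out_channels / groups, kW);
# with out_channels / groups = 4, the result has 2 * 4 = 8 channels
w <- torch_randn(c(16, 4, 5))
y <- torch_conv_transpose1d(x, w, groups = 2)
y$shape  # 4 8 54
}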

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

See nn_conv_transpose1d() for details and output shape.
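
The output length follows the usual transposed-convolution shape rule (see nn_conv_transpose1d()): \(L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (\mbox{kernel\_size} - 1) + \mbox{output\_padding} + 1\). For the example below, \((50 - 1) \times 1 - 0 + 1 \times (5 - 1) + 0 + 1 = 54\), matching the printed shape.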

Examples

if (torch_is_installed()) {

# input: a batch of 20 signals with 16 channels, each of length 50
inputs <- torch_randn(c(20, 16, 50))
# weight: 16 in_channels, 33 out_channels, kernel size 5
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 8  -3.1930  -4.2702  -9.2833  -9.8059 -16.5826   5.8744 -18.6023   4.3550
#>    5.0406  -1.3268  -2.5253  -1.6843  18.2028   1.6143  -6.7838  -4.8355
#>   -0.7754   4.1768   4.0432  11.5067 -22.8409  -4.3929  -1.2036   6.1182
#>    1.9391   7.1346   9.5663  -3.8963   2.3459  13.0690   4.4142 -19.7730
#>   -1.2764   1.7431   3.5372  -5.1710  -0.2980  -5.2598  -7.5026  -9.0067
#>   -3.0922  -8.5973  10.8830   8.8740   3.5031  -1.4500  12.0707   4.9385
#>    3.0896  -2.1205  -1.9695  -7.5924  -9.4034   3.3449   4.0674   2.1648
#>    0.7106  -0.8272  -7.2262  -5.4454  -7.2493  -1.2627  10.3142  -8.2408
#>   -4.9319  -2.3612  -9.6092 -14.9387  -8.4127  -0.8551  -2.4119  -9.4092
#>    3.1056  -4.7421  -1.5897   1.0372 -14.0019   4.1923   8.1081  -0.9687
#>   -6.6328  12.1926  -1.8403   0.5912  -6.5086  12.5544   2.9453 -18.4704
#>   -1.3925  -7.7967   3.8979  -3.6840   4.1140  -5.7330  19.1064   6.5537
#>   -1.9558  -3.3488   5.8775 -14.0649  16.4372  -2.4393  -1.5132 -19.2901
#>   -0.9390  -2.7121  -6.2544  -6.6693  -9.8632   5.0072  -2.3626  -6.0255
#>    1.1700  -3.3658  -2.2888  16.0395  -8.4148   4.8461  -4.9061   3.9041
#>   -1.7003  -0.9782  13.1990  -8.8022  -9.3029  -8.3303  -3.7003  -1.9959
#>   -6.5065  14.2878   7.1381   0.4472   3.5396  -3.1725   1.2212   6.3966
#>    2.4514   2.5781  11.3356  16.9954  -6.9314  -5.6187  -3.9537   4.7059
#>   -1.4350  -6.3739   0.7061   3.3706  12.7322  -8.8905 -10.2085   5.4558
#>   -2.8452  -2.0133  -2.2198 -16.1495  -5.5190 -30.6262   5.1324   2.1816
#>   -0.6637  -5.0911  -5.4530   7.5424 -18.6864   7.5104  -5.3324  10.9290
#>   -3.3986  -1.1037   0.6178  -2.7484 -13.9215   1.9658   3.1544  -3.0159
#>    5.7432  -6.8602  -2.7904   9.2068  17.4011 -14.9211  -7.0641  10.3138
#>    4.0683  -2.5613   9.4421   7.7749  10.2322   0.7396   1.2064 -14.1870
#>    2.3647   1.5983  -1.1869  -0.3526   4.6611   4.9450   7.6182   2.3824
#>  -11.0990   8.0551  -0.1668   1.5280  -7.0258   2.0969 -11.2803  -7.8665
#>   -1.5388   5.8533 -17.8767  -8.3429 -18.1840   4.3117  -6.2942  -2.3898
#>   -2.9700   7.7780  -3.2994  16.5755  -9.0383   4.4282   1.6417  -6.5740
#>    2.2099   0.5796   2.5021  -0.2225   3.5526  -0.6024   4.0746 -11.5851
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
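
A further hedged sketch (shapes and argument values are illustrative assumptions, not from this page): with stride = 2 the operator roughly doubles the signal length, and output_padding resolves the one-sample ambiguity so that a length-50 input maps to exactly length 100:

if (torch_is_installed()) {

x <- torch_randn(c(20, 16, 50))
w <- torch_randn(c(16, 33, 5))
# (50 - 1) * 2 - 2 * 2 + 1 * (5 - 1) + 1 + 1 = 100
y <- nnf_conv_transpose1d(x, w, stride = 2, padding = 2, output_padding = 1)
y$shape  # 20 33 100
}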