
Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = NULL,
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

dilation * (kernel_size - 1) - padding zero-padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator (sometimes also called "deconvolution") over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.
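As a worked check, the output width follows the standard transposed-convolution shape relation documented for nn_conv_transpose1d(): \(L_{out} = (L_{in} - 1) \times \mbox{stride} - 2 \times \mbox{padding} + \mbox{dilation} \times (kW - 1) + \mbox{output\_padding} + 1\). A minimal sketch in plain R (the helper name `conv_transpose1d_out_len` is illustrative, not part of the package):

```r
# Output width of a 1D transposed convolution, per the shape formula
# given in the nn_conv_transpose1d() documentation.
conv_transpose1d_out_len <- function(l_in, kw, stride = 1, padding = 0,
                                     output_padding = 0, dilation = 1) {
  (l_in - 1) * stride - 2 * padding + dilation * (kw - 1) + output_padding + 1
}

# The example on this page uses iW = 50 and kW = 5 with all defaults,
# which reproduces the width 54 seen in the printed shape {20,33,54}:
conv_transpose1d_out_len(50, 5)
#> [1] 54
```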

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))  # (minibatch, in_channels, iW)
weights <- torch_randn(c(16, 33, 5))  # (in_channels, out_channels/groups, kW)
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 6  5.2541e+00 -2.0668e+00  1.0623e+00 -1.2802e+00 -1.0578e+01  4.6613e+00
#>  -9.3781e-01 -9.6152e+00 -3.5969e+00 -5.7889e+00 -1.1217e+01  1.1356e+01
#>   5.2971e+00 -1.3500e+01  1.5410e+01 -3.9218e+00 -8.5270e+00  1.4976e+01
#>  -9.8344e+00 -8.5735e+00  4.0113e+00 -4.2230e+00  1.7653e+01 -1.3448e+01
#>  -5.6829e+00  1.5947e+00  3.2126e+00 -7.0778e+00 -8.9623e-01  3.9162e+00
#>   6.2949e+00 -3.7269e-01  3.3599e+00  2.1149e+00  2.4228e+00  3.9571e+00
#>  -6.2083e+00  1.0025e+01  9.4139e-02 -2.1641e+01  9.3871e+00 -7.9130e+00
#>   4.0801e-01 -1.5610e+01  7.5275e+00  3.1077e+00 -3.4684e+00  1.0770e+01
#>   7.5022e+00  4.0125e-02  1.3258e+00  1.9677e+00 -9.6444e+00  7.0841e-02
#>   1.1762e+01 -9.5921e+00  1.5677e+01 -1.0776e+01  1.0770e+01  6.9177e+00
#>   2.9540e+00 -1.0420e+01 -2.3489e+00  7.0055e+00  3.1187e+00 -7.2584e+00
#>   2.6843e+00  2.6946e+00 -1.6968e+00  1.0638e+01 -1.2575e+01 -5.6474e+00
#>  -1.7625e+00  8.9843e+00  3.2183e+00 -7.9188e+00  1.3395e+01 -5.5159e+00
#>   1.9291e-01  1.0495e+01  7.9685e-01  2.7018e+00  1.6993e-01 -3.4039e+00
#>  -4.5074e+00 -4.8730e+00  7.5460e+00 -8.3048e+00 -4.9748e+00 -2.6284e+00
#>   9.3630e+00  1.5203e+01  3.6573e+00  5.3076e+00 -1.8882e+01 -5.6262e+00
#>   3.5361e-01 -6.5436e+00  6.7610e+00  4.0246e+00  6.9639e+00 -4.7238e+00
#>   4.2562e+00 -8.3618e+00  6.0051e+00 -5.6847e+00  1.3941e+00 -8.6367e+00
#>  -1.2052e+01  4.6459e+00  7.5283e+00  6.8172e+00  8.6650e+00  6.7877e+00
#>   2.0467e+00 -2.9341e+00  9.6027e+00 -9.1858e+00  7.0233e+00 -1.5314e+01
#>   1.1845e+00  5.6744e+00  1.0976e+01  2.8841e+00  1.0444e+00  2.5388e-01
#>  -7.8691e+00 -3.3863e+00 -7.3923e+00  1.3705e+01  6.7138e+00  3.1078e+00
#>   3.8501e+00 -4.5830e+00  6.8409e+00  4.4484e-01  7.4642e+00  6.1101e+00
#>  -1.9269e+00  4.3496e+00 -5.4194e+00  8.0312e+00  1.1284e+01  3.2208e-01
#>  -8.4366e+00  8.0003e+00 -2.9822e+00 -3.8303e+00  2.5436e+00 -6.4539e+00
#>   4.4716e+00 -1.1458e+01  4.1074e+00 -2.8760e+00  3.9034e-01 -5.7900e+00
#>  -5.0239e+00 -3.3466e+00 -3.4317e+00 -7.0105e+00  6.4688e+00 -4.7703e+00
#>   7.6602e+00 -9.0977e-01 -3.0971e-01 -4.1604e+00 -3.3196e+01 -4.8019e+00
#>   3.8746e+00 -5.6059e+00 -1.4430e+00  6.5607e+00 -8.1265e+00 -3.3211e+00
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
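To see how the parameters interact, a hedged sketch of the same call with stride = 2 (guarded by torch_is_installed() as above; the expected width follows from the shape formula in nn_conv_transpose1d()):

```r
# With stride = 2 the output width roughly doubles:
#   (50 - 1) * 2 + 1 * (5 - 1) + 1 = 103
expected_len <- (50 - 1) * 2 + 1 * (5 - 1) + 1

if (torch_is_installed()) {
  inputs <- torch_randn(c(20, 16, 50))
  weights <- torch_randn(c(16, 33, 5))
  out <- nnf_conv_transpose1d(inputs, weights, stride = 2)
  dim(out)  # 20 33 103
}
```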