Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape \((\mbox{minibatch} , \mbox{in\_channels} , iW)\)

weight

filters of shape \((\mbox{in\_channels} , \frac{\mbox{out\_channels}}{\mbox{groups}} , kW)\)

bias

optional bias of shape \((\mbox{out\_channels})\). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero-padding of dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW,). Default: 0

groups

split input into groups; \(\mbox{in\_channels}\) should be divisible by the number of groups. Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
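Taken together, these arguments determine the output length. A minimal arithmetic sketch (assuming the standard transposed-convolution length formula documented for nn_conv_transpose1d(); conv_transpose1d_out_len is a hypothetical helper, not part of torch):

```r
# Hypothetical helper: output length of a 1D transposed convolution,
# assuming the standard formula
#   iW_out = (iW - 1) * stride - 2 * padding +
#            dilation * (kW - 1) + output_padding + 1
conv_transpose1d_out_len <- function(iW, kW, stride = 1, padding = 0,
                                     output_padding = 0, dilation = 1) {
  (iW - 1) * stride - 2 * padding + dilation * (kW - 1) + output_padding + 1
}

conv_transpose1d_out_len(50, 5)             # 54, as in the example below
conv_transpose1d_out_len(50, 5, stride = 2) # 103: stride widens the output
```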

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator, sometimes also called "deconvolution", over an input signal composed of several input planes.

See nn_conv_transpose1d() for details and output shape.

Examples

if (torch_is_installed()) {

inputs <- torch_randn(c(20, 16, 50))
weights <- torch_randn(c(16, 33, 5))
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 6 -4.5590e+00 -3.6157e+00 -1.5995e+01 -2.6302e-01  7.8010e+00 -7.6253e+00
#>   1.0087e+00 -2.5526e+00  7.5468e+00  9.2595e-01  6.9462e+00  6.0876e+00
#>  -1.6047e+00  1.1309e+01 -1.2246e+00 -1.1724e+01  3.0614e+00 -3.8553e+00
#>  -1.9934e+00 -9.0566e+00 -6.0149e+00 -6.9119e+00  1.0916e+01  5.5529e+00
#>  -1.6136e-01  1.6686e+00  2.6446e-01 -8.9720e+00 -6.9535e-01 -1.8312e+01
#>   3.1942e+00 -1.1847e+01 -5.9132e+00  4.8085e+00 -3.4559e+00 -5.1355e+00
#>  -5.9551e-01  2.0413e+00  5.9313e-01  7.2394e+00 -4.5744e+00  1.7536e+00
#>  -5.9577e+00 -6.5645e-01  4.8830e+00  7.8732e+00  6.7515e+00 -1.2300e+01
#>   3.7813e+00  2.8785e+00  8.2800e+00  6.3688e-01  3.3223e+00  3.3147e+00
#>  -4.5054e+00  4.4753e+00  1.6109e+00  7.0146e+00  4.6539e-01 -1.2591e+01
#>  -9.9306e-01 -4.5217e+00 -2.5369e-01 -3.3427e+00 -4.0175e+00  5.0010e+00
#>  -3.6855e+00  2.5196e-01 -4.7192e+00 -8.4735e+00  5.2678e+00 -1.0826e+01
#>   2.0207e+00 -3.4159e-02 -6.0277e+00 -2.7943e-01  3.4112e+00 -3.7213e+00
#>   9.6600e-01 -1.1528e+00  2.4179e+00 -5.6259e+00 -4.8071e+00 -4.3573e+00
#>  -1.4543e+00 -3.1097e+00  3.6490e+00 -3.6019e-01 -1.0719e+00 -1.0138e+01
#>  -6.0724e+00 -6.8304e+00  7.8481e+00 -7.7227e+00 -1.3705e+01  1.1541e+01
#>   8.4789e-01  6.7280e+00 -2.3100e+00  3.9121e-01  9.0779e+00  1.1022e+01
#>   2.9938e+00 -4.0471e+00 -5.7521e+00  7.0884e+00  8.0573e+00 -8.6808e-01
#>  -1.1712e+01  4.8490e+00 -1.1712e+01  1.7699e+00  5.2789e+00 -1.5008e+00
#>  -4.1916e+00  5.7997e+00  1.9118e+00 -1.2406e+01 -1.5794e+00 -1.0002e+00
#>  -4.8675e+00 -3.3393e+00 -2.7013e+00 -8.2935e+00 -2.4946e-01 -4.1678e+00
#>  -2.8545e+00 -3.4270e-01  6.0899e+00  8.4625e+00  2.3614e+00 -3.1104e+00
#>   1.6071e+00 -8.0299e+00  3.2442e+00  8.1035e+00  1.1609e+01  8.8393e+00
#>  -8.6862e+00  5.3770e+00 -1.5401e+00  3.3190e+00  4.2553e+00 -8.2400e+00
#>   5.3247e-02  7.1692e+00 -1.1295e+01  1.3654e+01  4.3366e-01  4.7947e+00
#>  -3.8747e+00 -5.7438e+00 -3.6506e+00 -1.3831e+01  2.7401e+00 -5.3043e+00
#>  -2.5487e+00 -3.5727e+00 -8.8847e-01  6.9657e+00  9.8352e+00 -1.7957e+00
#>   4.0339e+00 -1.4266e+00  5.7051e+00 -3.8373e+00  7.2681e+00  4.3252e+00
#>  -7.3684e-01 -3.6297e-01 -2.9092e+00 -5.9178e-01  1.0972e+00 -6.1112e+00
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]
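A strided variant for comparison — a sketch reusing the same random shapes as the example above; the stride and output_padding values here are illustrative assumptions, not defaults:

```r
if (torch_is_installed()) {
  inputs <- torch_randn(c(20, 16, 50))
  weights <- torch_randn(c(16, 33, 5))
  # stride = 2 roughly doubles the length; output_padding = 1 adds one
  # extra position on one side, disambiguating the several input lengths
  # that a strided forward convolution would map to the same output size
  out <- nnf_conv_transpose1d(inputs, weights,
                              stride = 2, output_padding = 1)
  out$shape  # (50 - 1) * 2 + 1 * (5 - 1) + 1 + 1 = 104
}
```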