Conv_transpose1d

Usage

torch_conv_transpose1d(
  input,
  weight,
  bias = list(),
  stride = 1L,
  padding = 0L,
  output_padding = 0L,
  groups = 1L,
  dilation = 1L
)

Arguments

input

input tensor of shape (minibatch, in_channels, iW)

weight

filters of shape (in_channels, out_channels / groups, kW)

bias

optional bias of shape (out_channels). Default: NULL

stride

the stride of the convolving kernel. Can be a single number or a tuple (sW,). Default: 1

padding

implicit zero-padding of size dilation * (kernel_size - 1) - padding will be added to both sides of each dimension in the input. Can be a single number or a tuple (padW,). Default: 0

output_padding

additional size added to one side of each dimension in the output shape. Can be a single number or a tuple (out_padW). Default: 0

groups

split input into groups; in_channels should be divisible by the number of groups (see the sketch after this argument list). Default: 1

dilation

the spacing between kernel elements. Can be a single number or a tuple (dW,). Default: 1
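
A minimal sketch of how groups interacts with the shapes above (the tensors and the nnf_conv_transpose1d() call below are illustrative assumptions, not part of the original example): with groups = 2, the 8 input channels are split into two groups of 4, and the weight's second dimension is out_channels / groups.

if (torch_is_installed()) {
  x = torch_randn(c(1, 8, 30))  # (minibatch, in_channels, iW); in_channels divisible by groups
  w = torch_randn(c(8, 6, 3))   # (in_channels, out_channels / groups, kW)
  y = nnf_conv_transpose1d(x, w, groups = 2)
  y$shape                       # 1 12 32: out_channels = 6 * 2 = 12
}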

conv_transpose1d(input, weight, bias=NULL, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor

Applies a 1D transposed convolution operator over an input signal composed of several input planes, sometimes also called "deconvolution".

See nn_conv_transpose1d() for details and output shape.
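
As a quick, hedged sanity check of the output width, assuming the usual transposed-convolution relation (iW - 1) * stride - 2 * padding + dilation * (kW - 1) + output_padding + 1 (the tensors below are illustrative, not taken from the original example):

if (torch_is_installed()) {
  x = torch_randn(c(1, 4, 10))  # (minibatch, in_channels, iW)
  w = torch_randn(c(4, 8, 3))   # (in_channels, out_channels / groups, kW)
  y = nnf_conv_transpose1d(x, w, stride = 2, padding = 1, output_padding = 1)
  y$shape                       # 1 8 20, since (10 - 1) * 2 - 2 * 1 + (3 - 1) + 1 + 1 = 20
}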

Examples

if (torch_is_installed()) {

inputs = torch_randn(c(20, 16, 50))  # (minibatch, in_channels, iW)
weights = torch_randn(c(16, 33, 5))  # (in_channels, out_channels / groups, kW)
nnf_conv_transpose1d(inputs, weights)
}
#> torch_tensor
#> (1,.,.) = 
#>  Columns 1 to 6  2.1065e+00  4.2545e+00 -7.2812e+00  9.3095e+00 -1.1119e+01  6.0161e+00
#>  -3.4632e+00  1.2377e+01  1.2267e+00 -7.7795e+00  2.7676e+00 -1.3448e+00
#>   2.5015e+00  3.7745e+00  7.2036e+00  3.1809e+00 -8.9164e+00  2.0263e+00
#>  -6.8297e+00 -8.7407e+00 -1.3858e+01  1.0621e+01  1.3752e+01 -3.5345e-01
#>   1.3619e+00  6.3470e+00  9.5993e+00  3.4359e+00  4.7840e+00  4.3021e+00
#>  -5.3539e-01 -5.2455e+00  2.1651e+00  2.6303e-01  5.7846e+00  1.5671e+01
#>   3.8814e+00  8.5034e+00 -3.0298e+00 -7.5409e+00 -2.0862e+00 -1.2639e+01
#>   8.3029e+00 -4.9851e+00  1.4344e+01  1.1211e+01  9.6320e+00 -1.1655e+01
#>  -5.3246e+00  5.5726e+00  6.9331e+00 -1.5614e+00 -2.5315e+00  5.3770e+00
#>  -2.8534e+00 -4.8446e+00  8.2376e+00  5.3437e+00  1.4911e+01  4.6172e+00
#>  -1.4600e+00 -2.7250e-01 -5.8968e+00 -1.7803e+00  1.5687e+01  1.4063e+01
#>   5.5213e+00  5.8617e+00  1.5800e+01 -6.7872e-01  4.7496e+00  1.0845e+01
#>  -5.2515e+00 -9.4890e+00 -2.2794e+00  1.2049e+01  4.7434e+00  2.3950e+00
#>   2.0024e+00  1.7202e+00 -8.8798e+00  8.6475e+00 -1.1650e+01 -1.6203e+01
#>   7.5844e+00 -5.4997e-01 -1.2895e+01  6.3570e+00 -2.8748e-01  8.4130e+00
#>  -5.3229e+00  1.0014e+01 -2.2845e+00  1.7580e+00 -4.5975e+00 -1.0274e+01
#>   5.6174e+00 -3.0126e-01 -3.3910e+00 -1.1294e+01 -5.6563e+00 -6.7511e+00
#>  -4.3260e+00  2.7836e+00 -2.3220e+00  5.3535e+00  3.2402e+00  2.0657e+01
#>   6.2748e+00  2.9895e+00  1.2423e+00 -2.6458e+00  4.8733e+00  8.7331e-01
#>  -4.0596e+00  1.2340e+01 -2.1270e+01  4.7890e+00 -7.7019e+00  1.4417e+01
#>   7.4039e+00  3.1946e+00 -1.2608e+01 -1.2005e+01 -2.6921e+00  1.1020e+01
#>  -2.7700e+00 -3.6290e+00 -5.0030e+00  4.4270e+00 -6.5011e+00  5.2694e+00
#>  -7.9219e+00 -6.8214e+00 -1.1008e+01 -2.2366e+00  7.5697e-01 -8.6133e-01
#>   5.8025e+00 -1.4334e-01 -1.0507e+01 -2.8326e+00 -8.1040e+00  7.3243e+00
#>  -3.1109e+00  1.8539e+00 -1.8897e-01  4.6636e+00  1.3963e+01 -4.2291e+00
#>   4.2664e+00 -3.4181e+00 -1.1691e+01  8.1366e+00  6.8679e-01 -8.4025e+00
#>   4.5487e-01  2.2341e+00 -5.8664e+00  6.4070e+00 -7.9101e+00  9.7662e+00
#>  -9.4078e-01 -2.6026e+00 -4.4460e+00  1.5725e+00 -5.2829e+00 -7.5777e+00
#>   1.7187e+00  1.8971e+00  1.6945e+00  7.0668e+00 -9.6171e+00  6.7587e+00
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,33,54} ]