Ifft
torch_ifft(self, signal_ndim, normalized = FALSE)
self | (Tensor) the input tensor of at least signal_ndim + 1 dimensions |
---|---|
signal_ndim | (int) the number of dimensions in each signal. |
normalized | (bool, optional) controls whether to return normalized results. Default: FALSE |
For CUDA tensors, an LRU cache is used for cuFFT plans to speed up repeatedly running FFT methods on tensors of the same geometry with the same configuration. See cufft-plan-cache for more details on how to monitor and control the cache.
Complex-to-complex Inverse Discrete Fourier Transform
This method computes the complex-to-complex inverse discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:
$$
X[\omega_1, \dots, \omega_d] =
\frac{1}{\prod_{i=1}^d N_i} \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d]
e^{\ j\ 2 \pi \sum_{i=1}^d \frac{\omega_i n_i}{N_i}},
$$
where \(d\) = signal_ndim is the number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).
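As an illustrative sanity check (not part of the package documentation), the sketch below evaluates the 1-D inverse DFT sum directly in base R and compares it with torch_ifft for a small signal stored as an N x 2 tensor of real and imaginary parts:

```r
library(torch)

if (torch_is_installed()) {
  # A length-4 complex signal stored as a 4 x 2 tensor (real, imaginary).
  n <- 4
  x_re <- rnorm(n); x_im <- rnorm(n)
  x <- torch_tensor(cbind(x_re, x_im))

  # Inverse DFT via torch (signal_ndim = 1; the 1/N factor is included).
  y_torch <- torch_ifft(x, 1)

  # Direct evaluation of X[w] = (1/N) sum_n x[n] exp(j 2 pi w n / N).
  x_cplx <- complex(real = x_re, imaginary = x_im)
  y_direct <- sapply(0:(n - 1), function(w) {
    sum(x_cplx * exp(1i * 2 * pi * w * (0:(n - 1)) / n)) / n
  })

  # The two results agree up to floating point error.
  print(y_torch)
  print(round(cbind(Re(y_direct), Im(y_direct)), 4))
}
```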
The argument specifications are almost identical to those of torch_fft. However, if normalized is set to TRUE, this instead returns the results multiplied by \(\sqrt{\prod_{i=1}^d N_i}\), to become a unitary operator. Therefore, to invert a torch_fft, the normalized argument should be set to the same value for both the torch_fft and torch_ifft calls.
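For illustration only (a minimal sketch, not from the original documentation), the round trips below show that the normalized argument must match between the forward and inverse calls:

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_randn(c(4, 2))  # one 1-D complex signal of length 4

  # Un-normalized round trip: torch_ifft directly undoes torch_fft.
  x1 <- torch_ifft(torch_fft(x, 1), 1)

  # Normalized round trip: both calls scale by 1 / sqrt(N), and only
  # together do they form an exact inverse pair.
  x2 <- torch_ifft(torch_fft(x, 1, normalized = TRUE), 1,
                   normalized = TRUE)

  print(torch_allclose(x, x1, atol = 1e-5))  # TRUE
  print(torch_allclose(x, x2, atol = 1e-5))  # TRUE
}
```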
Returns the real and the imaginary parts together as one tensor of the same shape as the input.
The inverse of this function is torch_fft.
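A small check of the return format (illustrative only): the output packs the real and imaginary parts into the last dimension, so its shape matches the input's.

```r
library(torch)

if (torch_is_installed()) {
  x <- torch_randn(c(5, 2))  # length-5 complex signal: last dim is (real, imag)
  y <- torch_ifft(x, 1)
  print(dim(x))              # 5 2
  print(dim(y))              # 5 2 -- same shape as the input
}
```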
For CPU tensors, this method is currently only available with MKL. Use torch_backends.mkl.is_available to check if MKL is installed.
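The name above follows the upstream (Python) docs; assuming the R-side check is exposed as backends_mkl_is_available(), a CPU FFT call can be guarded like this (a sketch, adjust to your installed torch version):

```r
library(torch)

if (torch_is_installed()) {
  # backends_mkl_is_available() is assumed to be the R equivalent of the
  # MKL check mentioned above.
  if (backends_mkl_is_available()) {
    x <- torch_randn(c(3, 3, 2))
    print(torch_ifft(torch_fft(x, 2), 2))
  } else {
    message("MKL not available: CPU FFT methods are unsupported.")
  }
}
```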
```r
if (torch_is_installed()) {
  x = torch_randn(c(3, 3, 2))
  x
  y = torch_fft(x, 2)
  torch_ifft(y, 2) # recover x
}
#> torch_tensor
#> (1,.,.) =
#>  -0.1112  0.1976
#>  -0.6745 -0.6398
#>   0.5422 -0.5937
#>
#> (2,.,.) =
#>  -1.9813  0.5721
#>   2.0184 -0.7911
#>   1.0326 -0.0483
#>
#> (3,.,.) =
#>   0.2098  0.7124
#>   0.9631  0.2883
#>  -1.9407 -0.0044
#> [ CPUFloatType{3,3,2} ]
```