Fft

torch_fft(self, signal_ndim, normalized = FALSE)

Arguments

self

(Tensor) the input tensor of at least signal_ndim + 1 dimensions

signal_ndim

(int) the number of dimensions in each signal. signal_ndim can only be 1, 2 or 3

normalized

(bool, optional) controls whether to return normalized results. Default: FALSE

Note

For CUDA tensors, an LRU cache is used for cuFFT plans to speed up
repeatedly running FFT methods on tensors of same geometry with same
configuration. See cufft-plan-cache for more details on how to
monitor and control the cache.

torch_fft(self, signal_ndim, normalized = FALSE) -> Tensor

Complex-to-complex Discrete Fourier Transform

This method computes the complex-to-complex discrete Fourier transform. Ignoring the batch dimensions, it computes the following expression:

$$ X[\omega_1, \dots, \omega_d] = \sum_{n_1=0}^{N_1-1} \dots \sum_{n_d=0}^{N_d-1} x[n_1, \dots, n_d] e^{-j\ 2 \pi \sum_{i=1}^d \frac{\omega_i n_i}{N_i}}, $$ where \(d\) = signal_ndim is the number of dimensions for the signal, and \(N_i\) is the size of signal dimension \(i\).

This method supports 1D, 2D and 3D complex-to-complex transforms, indicated by signal_ndim. input must be a tensor whose last dimension has size 2, representing the real and imaginary components of complex numbers, and must have at least signal_ndim + 1 dimensions, optionally preceded by an arbitrary number of leading batch dimensions. If normalized is set to TRUE, the result is normalized by dividing it by \(\sqrt{\prod_{i=1}^d N_i}\) so that the operator is unitary.

Returns the real and the imaginary parts together as one tensor of the same shape as input.
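The shape contract above can be checked directly. A minimal sketch, assuming the torch R package is installed and provides torch_fft (older releases; newer releases expose torch_fft_fft instead):

```r
library(torch)

# A batch of 4 one-dimensional signals of length 8, stored in the
# complex layout torch_fft expects: the trailing dimension of size 2
# holds the (real, imaginary) components.
x <- torch_randn(c(4, 8, 2))

# 1D complex-to-complex FFT over the signal dimension.
y <- torch_fft(x, signal_ndim = 1)

# The result has the same shape as the input: c(4, 8, 2).
stopifnot(identical(dim(y), dim(x)))
```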

The inverse of this function is torch_ifft.

Warning

For CPU tensors, this method is currently only available with MKL. Use backends_mkl_is_available to check if MKL is installed.
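On CPU-only setups it is worth guarding the call. A sketch using the torch R helper backends_mkl_is_available:

```r
library(torch)

# Only attempt the FFT when the MKL backend is present; otherwise
# torch_fft is unsupported for CPU tensors.
if (backends_mkl_is_available()) {
  x <- torch_randn(c(4, 3, 2))
  torch_fft(x, signal_ndim = 2)
} else {
  message("MKL not available; torch_fft cannot run on CPU tensors.")
}
```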

Examples

if (torch_is_installed()) {
  # unbatched 2D FFT
  x <- torch_randn(c(4, 3, 2))
  torch_fft(x, 2)
  # batched 1D FFT
  torch_fft(x, 1)
  # arbitrary number of batch dimensions, 2D FFT
  x <- torch_randn(c(3, 3, 5, 5, 2))
  torch_fft(x, 2)
}
#> torch_tensor
#> (1,1,1,.,.) =
#>  -2.3428  -8.8931
#>  -0.9329   2.5940
#>   1.7995 -10.8052
#>  -1.1522  -1.0306
#>  -4.7890   3.4165
#>
#> (2,1,1,.,.) =
#>   4.3706  -6.0271
#>   3.2724  -6.5294
#>   2.1472   0.2018
#>  -6.2654   8.1258
#>   8.1230  -2.0722
#>
#> (3,1,1,.,.) =
#>  -9.4490  -0.3329
#>   1.2606  -2.8233
#>   2.9336  -2.3833
#>  -0.7245   1.5270
#>  -4.3890   6.6481
#>
#> (1,2,1,.,.) =
#>   2.8897  -0.2716
#>   0.1113   4.7869
#>   3.4957   8.8017
#>   6.7336  -2.2698
#>  -5.2185   1.3936
#>
#> (2,2,1,.,.) =
#>   4.5331 -11.0508
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{3,3,5,5,2} ]