Design for Resource-Efficient Parallel Solution to Real-Data Sparse FFT
Keith John Jones

Abstract
The maximum size of data set presented to the discrete Fourier transform (DFT) is becoming increasingly large, reflecting the increasingly challenging problems faced in today’s ‘big data’ era in areas such as astronomy, medical imaging and the real-time spectrum analysis of multi-GHz radio-frequency signals for cognitive radio networks. Such problems are typically addressed by means of the fast Fourier transform (FFT), but there will always be data sets – typically real-valued in nature – that are too large to be processed efficiently in real time with existing computing technology, so that alternative approaches are needed. The approach pursued here for the spectrum analysis problem assumes that only a relatively small number of outputs are likely to contain detectable levels of signal energy, with such signals being detected through the use of a sparse version of the FFT (sFFT). A flexible and scalable sFFT design has been sought for implementation with silicon-based computing technology that is able to yield resource-efficient, low-power solutions by maximizing the computational density through exploitation of both partitioned memory and the real-valued nature of the data. A theoretical analysis shows how this may be achieved with a parameterized solution which, with a low-end field-programmable gate array (FPGA) device, a 2 GHz sampling rate and a 100 MHz clock rate, achieves a latency of < 1 ms for a 2 million-point real-data sFFT together with low resource utilization, and which compares favourably with other recently published FFT and sFFT solutions.
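To illustrate the premise underlying the sparse approach – that only a few frequency bins of a real-valued input carry detectable signal energy – the following minimal Python sketch (not the paper’s sFFT algorithm) locates the energetic bins of a real-valued signal via a half-spectrum FFT and a thresholded noise-floor estimate. The function name `detect_sparse_bins` and the detection margin `threshold_ratio` are illustrative assumptions; `numpy.fft.rfft` exploits the conjugate symmetry of the real-data spectrum, computing only N//2 + 1 bins, which is the same real-data redundancy the proposed design exploits in hardware.

```python
import numpy as np

def detect_sparse_bins(x, threshold_ratio=25.0):
    """Illustrative sketch: return the indices of the few frequency
    bins of a real-valued signal x that stand well above the noise
    floor. threshold_ratio is an assumed detection margin."""
    spectrum = np.fft.rfft(x)            # half-spectrum of real input
    power = np.abs(spectrum) ** 2
    noise_floor = np.median(power)       # robust noise-floor estimate
    return np.nonzero(power > threshold_ratio * noise_floor)[0]

# Example: two tones buried in noise (deterministic seed for repeatability)
rng = np.random.default_rng(0)
n = 4096
t = np.arange(n)
x = (np.sin(2 * np.pi * 100 * t / n)
     + 0.5 * np.sin(2 * np.pi * 700 * t / n)
     + 0.05 * rng.standard_normal(n))
print(detect_sparse_bins(x))  # expect only bins 100 and 700
```

Because the output spectrum is sparse, only these few detected bins need to be computed or reported, which is what makes sub-linear sFFT schemes and resource-efficient hardware realizations attractive for very long transforms.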