FPGAs – greener, faster computing?
The move towards greener computing, which uses less power, the emergence of a range of disruptive technologies that support this move, and developments in the market for high-performance embedded systems have once again brought field programmable gate arrays (FPGAs) into focus.
Introduced in the 1980s, an FPGA is a semiconductor device containing programmable logic components and programmable interconnects. The programmable logic components can be configured to emulate the functionality of basic logic gates and more complex arithmetic functions. A hierarchy of programmable interconnects allows the logic blocks of an FPGA to be connected flexibly. These blocks and interconnects can be programmed to perform whatever logical function is needed. Modern FPGAs contain from tens of thousands to several million logic gates.
FPGAs can be programmed at run-time with a particular algorithm or application. This is the basic idea behind reconfigurable computing. Typical applications include medical imaging, cryptography, financial simulations and bioinformatics. Traditionally, FPGAs have been used in embedded devices, where they have advantages over more conventional processors in terms of power consumption, flexibility and upgradability, and where the effort needed to program them can be amortised over large numbers of units. However, FPGAs are increasingly being used in conventional high-performance computing applications, where numerically-intensive computational kernels are run on the FPGA instead of on a conventional microprocessor.
Because of the large number of logic gates, the equivalent of many different processing units can be configured to operate simultaneously on the same problem. The inherent parallelism of the logic resources on an FPGA enables considerable computational throughput even at a sub-500MHz clock rate. For example, the current generation of FPGAs can implement around 100 single precision floating point units per device, all of which can compute a result every clock cycle. The flexibility of an FPGA allows even higher performance by trading off precision and range in the number format for an increased number of parallel arithmetic units. For real-world, compute-intensive applications, FPGAs have the potential to provide orders-of-magnitude speedups over conventional processors with greatly reduced power consumption. Similar claims are being made for multi-core processors and for other disruptive technologies. The jury remains out on which technology or technologies will eventually prevail.
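The throughput figures quoted above can be sketched with a short back-of-envelope calculation. This is purely illustrative: the 100 floating point units and the one-result-per-cycle rate come from the article, while the particular clock rate (400 MHz, below the sub-500MHz ceiling mentioned) and the assumption that halving precision doubles the unit count are assumptions for the sketch, not measured figures.

```python
# Back-of-envelope peak throughput for an FPGA, using the article's figures.
fp_units = 100        # single precision floating point units per device (from the article)
clock_hz = 400e6      # assumed clock rate, below the sub-500 MHz ceiling quoted

# Each unit computes one result per clock cycle.
peak_flops = fp_units * clock_hz
print(f"Single precision peak: {peak_flops / 1e9:.0f} GFLOP/s")

# Illustrative precision trade-off: assume a reduced-precision format
# lets twice as many arithmetic units fit in the same logic resources.
reduced_precision_units = 2 * fp_units
reduced_peak = reduced_precision_units * clock_hz
print(f"Reduced precision peak: {reduced_peak / 1e9:.0f} GFLOP/s")
```

Even at a clock rate far below that of a contemporary microprocessor, the parallel units together deliver tens of GFLOP/s, which is the point the paragraph above is making.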
Traditionally, the embedded systems and electronic design communities have programmed FPGAs using hardware design languages such as VHDL and Verilog. This means of programming requires electronic design skills, particularly when hardware complexities relating to I/O, communications, memory access and host interfaces are taken into account. Whilst the use of hardware design languages has some advantages, particularly with respect to performance and resource utilisation, it is not readily accessible to most applications programmers. In recent years, this obstacle has been partially alleviated through the development of high-level language tools. These have been mostly based on subsets of ANSI C and require a high level of expertise to convert code and optimise it to extract the inherent instruction-level parallelism of the target FPGA. All of these tools suffer from the disadvantage that they cannot take an application written in standard C and convert it automatically into an optimised configuration of a target FPGA. To achieve this, substantial programmer intervention is still necessary.
Industry projections are that Moore's law (which relates to the density of gates, not performance) will remain valid for at least the next five years. This is because silicon fabrication technologies will continue to advance, resulting in devices with increasing numbers of gates. This applies to conventional processors, to the multi-core processors which will supersede them and to FPGAs. What is clear, however, is that new programming models, languages and software tools will need to be developed to program both multi-core processors and FPGAs efficiently and flexibly. The advantage that conventional microprocessors have held over FPGAs in terms of ease of programming is therefore no longer assured.
In recent years, the numerical performance of FPGAs has shown a steady improvement with respect to conventional microprocessors. Firstly, both types of device use the same fabrication technologies. Secondly, FPGA clock speeds are increasing more rapidly than those of microprocessors, by a factor of approximately 20% per year. Thirdly, optimisations of the logic through the incorporation of dedicated arithmetic units such as adders and multipliers have the potential to offer up to an order of magnitude in improved floating point performance. Consequently, the numerical performance of FPGAs relative to conventional microprocessors has significant scope for improvement for the foreseeable future.
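The compounding effect of the quoted clock-speed trend can be made concrete with a small sketch. The ~20% per year relative gain is the article's figure; the five-year horizon is an assumption chosen to match the Moore's law projection above, not a prediction from the source.

```python
# Compounding of the ~20% per year relative clock-speed gain of
# FPGAs over microprocessors, as quoted in the article.
relative_gain_per_year = 1.20   # FPGA clock speed grows ~20% faster per year
years = 5                       # illustrative horizon (assumed)

relative_speedup = relative_gain_per_year ** years
print(f"Relative clock gain after {years} years: {relative_speedup:.2f}x")
```

A 20% annual edge compounds to roughly a 2.5x relative gain over five years, before the dedicated arithmetic units mentioned above are even taken into account.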
Although FPGAs have clear and sustainable performance advantages over mainstream microprocessors, both in operations per second per unit area of silicon and in operations per second per watt, ease of programming remains a major barrier to their mainstream use. Furthermore, at current sales volumes, the performance advantages of FPGAs are eroded by unit costs higher than those of corresponding microprocessors. However, if FPGAs were more programmable, they would be more widely used both in high-performance and in general-purpose computing, their unit costs would fall and their price/performance would improve. Several initiatives world-wide are now addressing these issues to take the undoubted potential of FPGAs closer to market.
A notable example is a consortium of industrial and academic partners, led by the University of Edinburgh, which has formed an interest group called The FPGA High Performance Computing Alliance (FHPCA, http://www.fhpca.org). The FHPCA is pioneering the use of FPGAs in high-performance applications and has developed a 64-FPGA supercomputer called Maxwell, together with three industrial demonstrators from the medical, financial and geophysical sectors. Through the experience gained in porting the industrial demonstrators, the FHPCA has clearly identified many of the issues related to programming FPGAs. It is now proposing initiatives to facilitate the use of FPGAs by the general-purpose applications programmer. If these issues can be successfully addressed, FPGAs will have a strong role to play in the wave of disruptive processor technologies set to take over from the static processor architectures of the last two decades.