Reconfigurable Computing Systems – Speeds & Feeds
Recorded 16 December 2016 in Lausanne, Vaud, Switzerland
Event: KTN - Know Thy Neighbor
With the end of Dennard scaling and the imminent end of Moore’s Law, there is increasing interest in making better use of the billions of transistors on modern chips, so we can continue to address new and challenging problems that require large amounts of computation. Reconfigurable computing, which uses programmable logic devices such as FPGAs, is attracting increased attention. At Microsoft, I helped build Catapult, an FPGA accelerator for the Bing search engine, which doubled its performance and led to the widespread adoption of FPGAs in Microsoft’s data centres.
The process of building Catapult exposed some significant challenges, which are a focus of my research. FPGAs are not a one-for-one replacement for processors, since programming an FPGA is closer to hardware design than software development. Moreover, FPGAs are low-level devices that must be integrated into a larger system, so that data is delivered at sufficient speed, failures are handled, and the tradeoffs involving cost and power are appropriate. Finally, reconfigurable computing is not appropriate for all problems, so how do you evaluate its potential benefit before building a full system? Widespread adoption of reconfigurable computing awaits better solutions to these challenges.
With Ed Bugnion and several students, I am investigating hardware acceleration for genomic data processing. This new application domain involves large amounts of data and significant computation, making it an excellent setting in which to explore solutions to the problems of reconfigurable computing, and also to build practical systems that accelerate research in personalized medicine.