Machine Learning on FPGAs

Machine learning algorithms can exploit the inherent parallelism, deep pipelining, and custom-precision arithmetic that an FPGA provides.


Linear Model Training with ZipML

In our ZipML framework, we are exploring these ideas to accelerate training and inference.

Our FCCM'17 paper (FPGA-accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-off) studies how to accelerate the training of dense linear models using low-precision data, performing the training on an FPGA. We present FPGA-based trainers that support both single-precision floating-point and low-precision integer (8, 4, 2 and 1 bit) data, and study the trade-offs that affect the end-to-end performance of dense linear model training. Both the software and the FPGA designs presented in this work are available in our repository:
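To illustrate the low-precision training idea, here is a minimal Python sketch of unbiased stochastic quantization combined with SGD for a dense linear model. The function names, hyperparameters, and the use of two independent quantized copies of each sample (to keep the gradient estimate unbiased) are illustrative of the general approach, not the actual FPGA implementation.

```python
import numpy as np

def stochastic_quantize(x, bits):
    # Map values in [-1, 1] onto 2**bits - 1 evenly spaced levels,
    # rounding up or down at random so that E[q(x)] = x (unbiased).
    levels = 2 ** bits - 1
    scaled = (x + 1.0) / 2.0 * levels        # rescale to [0, levels]
    low = np.floor(scaled)
    prob_up = scaled - low                   # probability of rounding up
    q = low + (np.random.rand(*x.shape) < prob_up)
    return q / levels * 2.0 - 1.0            # rescale back to [-1, 1]

def sgd_train(X, y, bits, lr=0.05, epochs=20):
    # SGD for least-squares linear regression on quantized samples.
    # Two independent quantizations of the same sample keep the
    # gradient unbiased: E[xq1 @ w * xq2] = (x @ w) * x.
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in range(X.shape[0]):
            xq1 = stochastic_quantize(X[i], bits)
            xq2 = stochastic_quantize(X[i], bits)
            grad = (xq1 @ w - y[i]) * xq2
            w -= lr * grad
    return w
```

Because the quantizer is unbiased, SGD still converges to (a neighborhood of) the full-precision solution; fewer bits shrink the memory footprint and bandwidth needs at the cost of higher gradient variance, which is exactly the precision-convergence trade-off studied in the paper.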


Inference of Decision Tree Ensemble on CPU-FPGA Platforms

In this work we developed an inference system for decision tree ensembles on Intel's Xeon+FPGA platform. In our FPL'17 paper we explore the design of a flexible and scalable FPGA architecture. In addition, we combine CPU and FPGA processing to scale to large tree ensembles with millions of nodes. The developed system targets XGBoost, one of the most successful boosted-tree algorithms in machine learning. As future work, we want to explore using low precision to represent either the data or the tree nodes' threshold values, which would enable processing even larger ensembles on the FPGA at higher performance.
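To make the inference model concrete, below is a small Python sketch of ensemble traversal over a flattened node array, loosely modelled on how trees are typically memory-packed for streaming to an accelerator. The node layout and function names are hypothetical, not the actual Xeon+FPGA implementation.

```python
# Each node is a tuple (feature_index, threshold, left_child, right_child);
# a negative feature_index marks a leaf whose value is stored in `threshold`.
# Child fields hold array indices, so traversal is pointer-free --
# a layout that maps naturally onto on-chip memories.

def predict_tree(nodes, x):
    i = 0
    while True:
        feat, thresh, left, right = nodes[i]
        if feat < 0:                              # leaf reached
            return thresh
        i = left if x[feat] < thresh else right   # descend one level

def predict_ensemble(trees, x):
    # Boosted ensembles (e.g. XGBoost) sum the leaf values of all trees.
    return sum(predict_tree(t, x) for t in trees)
```

Because each tree is traversed independently, an FPGA can evaluate many trees in parallel and pipeline one level of comparisons per clock cycle; ensembles too large for on-chip memory can be split between CPU and FPGA.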





Kaan Kara, Dan Alistarh, Gustavo Alonso, Onur Mutlu, Ce Zhang.
FPGA-accelerated Dense Linear Machine Learning: A Precision-Convergence Trade-off.
IEEE 25th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), May 2017.

Muhsen Owaida, Hantian Zhang, Ce Zhang, Gustavo Alonso.
Scalable Inference of Decision Tree Ensembles: Flexible Design for CPU-FPGA Platforms.
IEEE 27th International Conference on Field-Programmable Logic and Applications (FPL), September 2017, Ghent, Belgium.