hadp2018

Hardware Architectures for Machine Learning - Spring 2018


Overview

The seminar covers recent results in the increasingly important field of hardware acceleration for data science, both in dedicated machines and in data centers. It is aimed at students interested in the system aspects of data processing who are willing to bridge the gap across traditional disciplines: machine learning, databases, systems, and computer architecture. The seminar should be of special interest to students considering a master thesis or even a doctoral dissertation on related topics.

Format

The seminar will start on February 22nd with an overview of the general topics and the intended format. Students are expected to present one paper in a 30-minute talk and to complete a 4-page report covering the main idea of the paper, how it relates to the other papers presented at the seminar, and the discussions around those papers. The presentation will be given during the semester in the allocated time slot. The report is due on the last day of the semester.

Attendance at the seminar is mandatory to complete the credit requirements. Active participation is also expected: students should read every paper in advance and contribute to the questions and discussion of each paper during the seminar.


Course Material


Talks

 

Speaker                Title                                                         Date/Time
Prof. Torsten Hoefler  How to Survive in This Seminar                                22nd February, 15:00
Dr. Muhsen Owaida      Data Processing on Hybrid CPU-FPGA Platforms                  22nd February, 16:00
Dr. Tal Ben Nun        Parallel and Distributed Deep Learning (Paper)                1st March, 15:00
Prof. Ce Zhang                                                                       8th March, 15:00
Prof. Onur Mutlu       Accelerating Genome Analysis: A Primer on an Ongoing Journey  8th March, 16:00

Schedule

NAME                   PAPER                                                                                                       DATE      MENTOR
Brunecker Oliver       Google Workloads for Consumer Devices: Mitigating Data Movement Bottlenecks                                 15 March  Onur Mutlu
Stephen Muller         High-Performance Recommender System Training Using Co-Clustering on CPU/GPU Clusters                        15 March  Muhsen Owaida
Somm Luca              PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory  22 March  Onur Mutlu
Yang Zhifei            KV-Direct: High-Performance In-Memory Key-Value Store with Programmable NIC                                 22 March  Gustavo Alonso
Aljoscha von Bismarck  Deep Learning at 15PF                                                                                       29 March  Tal Ben Nun
Michael Reto           Graph analytics and storage                                                                                 29 March  Tal Ben Nun
Chris Mnuk             Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters             12 April  Torsten Hoefler
De Rita Nicolo         SCNN: An Accelerator for Compressed-sparse Convolutional Neural Networks                                    12 April  Ce Zhang
Ursache Andrei         GraphBIG: Understanding Graph Computing in the Context of Industrial Solutions                              19 April  Muhsen Owaida
Tyukhova Alina         BlueDBM: Distributed Flash Storage for Big Data Analytics                                                   19 April  Ce Zhang
Cruceru Calin          QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding                                    26 April  Torsten Hoefler
Bloch Leonid           Caribou: Intelligent Distributed Storage                                                                    26 April  Gustavo Alonso

Seminar Hours

Thursdays, 15:00-17:00 in LEE C 104

Lecturers: