Thursday January 31, 2019
Start: 31.01.2019 10:00

Thursday, 31. January 2019, 10:00-11:00 in
CAB E 72

Speaker: Irene Zhang (Microsoft Research, Redmond)

Title: Demikernel: An Operating System Architecture for Hardware-Accelerated Datacenter Servers





As I/O devices become faster, the CPU is increasingly a bottleneck in today's datacenter servers. As a result, servers now integrate a variety of I/O accelerators -- I/O devices with an attached computational unit -- to offload functionality from the CPU (e.g., RDMA, DPDK and SPDK devices). More specifically, many of these devices improve performance by eliminating the operating system kernel from the I/O processing path. This change has left a gap in the datacenter systems stack: there is no longer a general-purpose, device-independent I/O abstraction. Instead, programmers build their applications against low-level device-specific interfaces, which are difficult to use and not portable.

This talk presents the Demikernel, a new operating system architecture for datacenter servers. Demikernel operating systems are split into a control-path kernel and a data-path library OS, which provides a new device-agnostic I/O abstraction for datacenter servers. Each Demikernel library OS implements this I/O abstraction in a device-specific way by offloading some functions to the device and implementing the remainder on the CPU. In this way, datacenter applications can use a high-level interface for I/O that works across a range of I/O accelerators without application modification.
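The split interface described above can be illustrated with a toy sketch (all names here are hypothetical, not Demikernel's actual API): applications program against a single device-agnostic queue abstraction, while each library OS supplies a device-specific backend.

```python
from abc import ABC, abstractmethod

class IoQueue(ABC):
    """Hypothetical device-agnostic I/O queue, in the spirit of the
    abstraction described in the talk (names are illustrative)."""

    @abstractmethod
    def push(self, payload: bytes) -> None: ...

    @abstractmethod
    def pop(self) -> bytes: ...

class DpdkQueue(IoQueue):
    """Backend that would drive a kernel-bypass NIC via DPDK;
    simulated here with an in-memory list."""
    def __init__(self):
        self._buf = []
    def push(self, payload: bytes) -> None:
        self._buf.append(payload)   # real backend: enqueue to NIC TX ring
    def pop(self) -> bytes:
        return self._buf.pop(0)     # real backend: poll NIC RX ring

class RdmaQueue(IoQueue):
    """Backend that would post RDMA work requests; simulated likewise."""
    def __init__(self):
        self._buf = []
    def push(self, payload: bytes) -> None:
        self._buf.append(payload)
    def pop(self) -> bytes:
        return self._buf.pop(0)

def echo_server(q: IoQueue) -> bytes:
    """Application code is written once against IoQueue and runs
    unmodified over either backend."""
    msg = q.pop()
    q.push(msg.upper())
    return q.pop()
```

The point of the sketch is the portability claim in the abstract: `echo_server` never mentions a device, so swapping `DpdkQueue` for `RdmaQueue` requires no application change.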


Irene Zhang is a researcher at Microsoft Research Redmond. Her current research focuses on new operating systems for datacenter servers and mobile devices. She recently received her PhD from the University of Washington, advised by Hank Levy and Arvind Krishnamurthy. Her thesis focused on distributed programming systems for wide-area applications. She is this year's recipient of the Dennis M. Ritchie SIGOPS dissertation award.



Monday February 18, 2019
Start: 18.02.2019 17:15

CAB G 61

Speaker: Tom Anderson (University of Washington)

Title: A Case for An Open Source CS Curriculum

Host: Timothy Roscoe


Despite rapidly increasing enrollment in CS courses, the academic CS community is failing to keep pace with demand for trained CS students. Further, the knowledge of how to teach students up to the state of the art is increasingly segregated into a small cohort of schools that mostly cater to students from families in the top 10% of the income distribution. Even in the best case, those schools lack the aggregate capacity to teach more than a small fraction of the nation's need for engineers and computer scientists. MOOCs can help, but they are mainly effective at retraining existing college graduates. In practice, most low- and middle-income students need a human teacher. In this talk I argue for building an open source CS curriculum, with autograded projects, instructional software, textbooks, and slideware, as an aid for teachers who want to improve the education in advanced CS topics at schools attended by the children of the 90%. As an example, I will describe our work on replicating the teaching of advanced operating systems and distributed systems.


Tom Anderson is the Warren Francis and Wilma Kolm Bradley Chair in the Paul G. Allen School of Computer Science and Engineering at the University of Washington. His research interests span all aspects of building practical, robust, and efficient computer systems, including distributed systems, operating systems, computer networks, multiprocessors, and security. He is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, as well as the winner of the USENIX Lifetime Achievement Award, the USENIX STUG Award, the IEEE Koji Kobayashi Computer and Communications Award, the ACM SIGOPS Mark Weiser Award, and the IEEE Communications Society William R. Bennett Prize. He is also an ACM Fellow, past program chair of SIGCOMM and SOSP, and he has co-authored twenty-one award papers and one widely used undergraduate textbook.

Thursday February 21, 2019
Start: 21.02.2019 10:00

Thursday, 21. February 2019, 10:00-11:00 in
CAB E 72

Speaker: Thomas Würthinger (Oracle Labs)

Title: Bringing the Code to the Data with GraalVM



High-performance language runtimes often execute isolated from datastores. Encoding logic in the form of stored procedures requires relying on different execution engines and sometimes even different languages. Our vision of the future of execution runtimes is GraalVM: an integrated, polyglot, high-performance execution environment that can not only run stand-alone but also be efficiently embedded in other systems. It supports shared tooling independent of the specific language and specific embedding. We designed the GraalVM runtime with complete separation of logical and physical data layout in mind. This allows direct access to custom data formats without marshalling overheads. GraalVM supports dynamic languages such as JavaScript, Ruby, Python and R. Additionally, even lower-level languages such as C, C++, Go, and Rust are integrated into the ecosystem via LLVM bitcode and can execute in a sandboxed and secure manner. We believe this language-level virtualisation will provide major benefits for system performance and developer productivity.


Thomas Wuerthinger is a researcher at Oracle Labs Switzerland. His research interests include virtual machines, feedback-directed runtime optimizations, and static program analysis. His current focus is the Graal project, which aims to develop a new dynamic compiler for Java. Additionally, he is the architect of the Truffle self-optimizing runtime system, which uses partial evaluation to automatically derive high-performance compiled code from AST interpreters. Before joining Oracle Labs, he worked on the IdealGraphVisualizer, the Crankshaft/V8 optimizing compiler, and the Dynamic Code Evolution VM. He received a PhD degree from the Johannes Kepler University Linz.





Thursday February 28, 2019
Start: 28.02.2019 10:00


Thursday, 28. February 2019, 10:00-11:00 in CAB E 72

Speaker: Alberto Lerner (University of Fribourg, Switzerland)

Title: The Case for Network-Accelerated Query Processing







The fastest plans in MPP databases are usually those with the least amount of data movement across nodes, as data is not processed while in transit. The network switches that connect MPP nodes are hard-wired to perform packet-forwarding logic only. However, in a recent paradigm shift, network devices are becoming “programmable.” The quotes here are cautionary. Switches are not becoming general purpose computers (just yet). But now the set of tasks they can perform can be encoded in software.

In this talk we explore this programmability to accelerate OLAP queries. We found that we can offload onto the switch some very common and expensive query patterns. Moving data through networking equipment can hence for the first time contribute to query execution. Our preliminary results show that we can improve response times on even the best agreed upon plans by more than 2x using 25 Gbps networks. We also see the promise of linear performance improvement with faster speeds. The use of programmable switches can open new possibilities of architecting rack- and datacenter-sized database systems, with implications across the stack.
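The idea of contributing to query execution while data moves can be illustrated with a toy model (illustrative only; real programmable switches have far stricter memory and compute limits): if the switch combines (key, value) packets headed to the same node, less data crosses the network.

```python
from collections import defaultdict

def switch_aggregate(packets):
    """Toy model of an aggregation offload in a programmable switch:
    (key, value) packets destined for the same node are combined in
    transit, so fewer packets leave the switch.  Real switch ASICs
    impose tight per-packet compute and on-chip memory budgets that
    this sketch ignores."""
    partial = defaultdict(int)
    for key, value in packets:
        partial[key] += value
    return sorted(partial.items())

# Without the offload, all six packets cross the network;
# with it, only one pre-aggregated packet per key does.
packets = [("a", 1), ("b", 2), ("a", 3), ("b", 4), ("a", 5), ("b", 6)]
```

A GROUP BY/SUM is exactly the kind of "common and expensive query pattern" the abstract mentions: the receiving node sees two partial sums instead of six raw tuples.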


Alberto Lerner is a Senior Researcher at the eXascale Infolab at the University of Fribourg, Switzerland. His interests include systems that closely couple hardware and software in order to realize untapped performance and/or functionality. Previously, he spent years in industry consulting for large, data-hungry verticals such as finance and advertising. He has also been part of the teams behind a few different database engines: IBM's DB2, working on robustness aspects of the query optimizer; Google's Bigtable, on elasticity aspects; and MongoDB, on general architecture. Alberto received his Ph.D. from ENST - Paris (now ParisTech), having done his thesis research at INRIA/Rocquencourt and NYU. He has also done post-doctoral work at IBM Research (both at T.J. Watson and Almaden).




Thursday March 21, 2019
Start: 21.03.2019 13:30

Thursday, 21. March 2019, 13:30-14:30 in CAB E 72

Speaker: Marko Vukolic (IBM Research)

Title: Hyperledger Fabric: a Distributed Operating System for Permissioned Blockchains




Fabric is a modular and extensible open-source system for deploying and operating permissioned blockchains and one of the Hyperledger projects hosted by the Linux Foundation. Fabric supports modular consensus protocols, which allows the system to be tailored to particular use cases and trust models. Fabric is also the first blockchain system that runs distributed applications written in standard, general-purpose programming languages, without systemic dependency on a native cryptocurrency. This stands in sharp contrast to existing blockchain platforms that require "smart contracts" to be written in domain-specific languages or rely on a cryptocurrency. Fabric realizes the permissioned model using a portable notion of membership, which may be integrated with industry-standard identity management. To support such flexibility, Fabric introduces an entirely novel blockchain design and revamps the way blockchains cope with non-determinism, resource exhaustion, and performance attacks. Although not yet performance-optimized, Fabric achieves, in certain popular deployment configurations, end-to-end throughput of more than 3500 transactions per second (of a Bitcoin-inspired digital currency), with sub-second latency, scaling well to over 100 peers. In this talk we discuss the Hyperledger Fabric architecture, detailing the rationale behind various design decisions. We also briefly discuss distributed ledger technology (DLT) use cases to which Hyperledger Fabric is relevant, including the financial industry, manufacturing, supply-chain management, and government.
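The novel design mentioned above is often summarized as "execute-order-validate": transactions are simulated first, then ordered, then validated against the current state. A heavily simplified sketch (real Fabric adds endorsement policies, signatures, and channels, all omitted here):

```python
def endorse(chaincode, tx_input, state):
    """Execute phase: endorsing peers simulate the transaction and
    record its read and write sets without updating state."""
    read_set = {k: state.get(k) for k in tx_input["reads"]}
    write_set = chaincode(tx_input, state)
    return {"reads": read_set, "writes": write_set}

def validate_and_commit(ledger_state, ordered_txs):
    """Validate phase: after ordering, each peer applies a transaction
    only if every key it read is unchanged (no stale reads)."""
    results = []
    for tx in ordered_txs:
        if all(ledger_state.get(k) == v for k, v in tx["reads"].items()):
            ledger_state.update(tx["writes"])
            results.append(True)
        else:
            results.append(False)   # conflicting tx is marked invalid
    return results

def transfer(tx_input, state):
    """Hypothetical chaincode written in a general-purpose language."""
    return {"balance": state["balance"] - tx_input["amount"]}
```

Two transactions endorsed against the same snapshot illustrate the conflict handling: the first commits, while the second is invalidated because its read of `balance` is now stale.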

Short Biography:

Dr. Marko Vukolić is a Research Staff Member in the Blockchain and Industry Platforms group at IBM Research - Zurich. Previously, he was a faculty member at EURECOM and a visiting faculty member at ETH Zurich. He received his PhD in distributed systems from EPFL in 2008 and his dipl. ing. degree in telecommunications from the University of Belgrade in 2001. His research interests lie in the broad area of distributed systems, including blockchain and distributed ledgers, cloud computing security, distributed storage, and fault-tolerance.




Thursday March 28, 2019
Start: 28.03.2019 10:00

Thursday, 28. March 2019, 10:00-11:00 in CAB E 72

Speaker: Theo Rekatsinas (University of Wisconsin)

Title: A Machine Learning Perspective on Managing Noisy Data




Modern analytics depend heavily on data preparation and data cleaning to produce accurate results. It is for this reason that the vast majority of the time devoted to analytics projects is spent on these high-effort tasks.

This talk describes recent work on making routine data preparation tasks dramatically easier. I will first introduce a noisy channel model to describe the quality of structured data and demonstrate how most work on noisy data management by the database community can be cast as a statistical learning and inference problem. I will then show how this noisy channel model forms the basis of HoloClean, a weakly supervised ML system for automated data cleaning. I will close with additional examples of how a statistical learning view can lead to new insights and solutions to classical database problems such as constraint discovery and consistent query answering.
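The statistical view of cleaning can be made concrete with a toy example (a majority-vote simplification; HoloClean itself performs weakly supervised probabilistic inference over many signals): under a functional dependency such as zip → city, disagreeing cells are treated as noisy observations of an underlying clean value.

```python
from collections import Counter

def repair_with_fd(rows, lhs, rhs):
    """Toy repair under a functional dependency lhs -> rhs: for each
    lhs value, the most common rhs value is taken as the clean signal,
    and disagreeing cells are repaired to it.  This majority vote only
    illustrates the noisy-channel view of cleaning; HoloClean infers
    repairs with a full probabilistic model."""
    votes = {}
    for row in rows:
        votes.setdefault(row[lhs], Counter())[row[rhs]] += 1
    repaired = []
    for row in rows:
        best = votes[row[lhs]].most_common(1)[0][0]
        repaired.append({**row, rhs: best})
    return repaired
```

For instance, if three tuples share zip 53706 but one spells the city "Madson", the constraint flags the cell and the vote repairs it to "Madison".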

Short Bio:

Theodoros (Theo) Rekatsinas is an Assistant Professor in the Department of Computer Sciences at the University of Wisconsin-Madison. He is a member of the Database Group. He earned his Ph.D. in Computer Science from the University of Maryland and was a Moore Data Postdoctoral Fellow at Stanford University. His research interests are in data management, with a focus on data integration, data cleaning, and uncertain data. Theo's work has been recognized with an Amazon Research Award in 2018, a Best Paper Award at SDM 2015, and the Larry S. Davis Doctoral Dissertation award in 2015.





Wednesday April 24, 2019
Start: 24.04.2019 15:00

CAB G 51

Moritz Hoffmann - PhD Defense: Managing and understanding distributed stream processing

Thursday April 25, 2019
Start: 25.04.2019 10:00

Thursday, 25. April 2019, 10:00-11:00 in CAB E 72

Speaker: Peter Pietzuch (Imperial College London)

Title: Scaling Deep Learning on Multi-GPU Servers





With the widespread availability of GPU servers, scalability in the number of GPUs when training deep learning models becomes a paramount concern. For many deep learning models, there is a scalability challenge: to keep multiple GPUs fully utilised, the batch size must be sufficiently large, but a large batch size slows down model convergence due to less frequent model updates.

In this talk, I describe CrossBow, a new single-server multi-GPU deep learning system that avoids the above trade-off. CrossBow trains multiple model replicas concurrently on each GPU, thereby avoiding under-utilisation of GPUs even when the preferred batch size is small. For this, CrossBow (i) decides on an appropriate number of model replicas per GPU and (ii) employs an efficient and scalable synchronisation scheme within and across GPUs.
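The core idea can be sketched in toy form (illustrative update rules and constants, not CrossBow's actual synchronisation algorithm): each replica takes an independent small-batch step, keeping its device busy, and a synchronisation step then pulls all replicas toward their average.

```python
def synchronize(replicas):
    """Average-based synchronisation: each replica is pulled toward
    the mean of all replicas (a simplified stand-in for CrossBow's
    synchronous model averaging)."""
    center = sum(replicas) / len(replicas)
    return [w + 0.5 * (center - w) for w in replicas]

def train(w0, replica_targets, lr=0.1, steps=100):
    """Toy objective per replica: minimise (w - t)^2, where each t is a
    noisy small-batch estimate of the same optimum.  Every iteration,
    each replica takes its own gradient step, then all replicas are
    synchronised; replicas stay close together while each device
    processes only a small batch."""
    replicas = [w0 for _ in replica_targets]
    for _ in range(steps):
        # independent small-batch gradient steps: grad = 2 * (w - t)
        replicas = [w - lr * 2 * (w - t)
                    for w, t in zip(replicas, replica_targets)]
        replicas = synchronize(replicas)
    return replicas
```

With targets clustered around 3.0, all replicas converge near 3.0: the averaging keeps them consistent even though no single replica ever sees a large batch.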

Short Bio:

Peter Pietzuch is a Professor at Imperial College London, where he leads the Large-scale Data & Systems (LSDS) group in the Department of Computing. His research focuses on the design and engineering of scalable, reliable and secure large-scale software systems, with a particular interest in performance, data management and security issues. He has published papers in premier international venues, including SIGMOD, VLDB, OSDI, USENIX ATC, EuroSys, SoCC, ICDCS, CCS, CoNEXT, NSDI, and Middleware. Before joining Imperial College London, he was a post-doctoral fellow at Harvard University. He holds PhD and MA degrees from the University of Cambridge.

Friday May 17, 2019
Start: 17.05.2019 12:00

Friday, 17. May 2019, 12:00-13:00 in CAB E 72

Speaker: Tim Kraska (MIT)

Title: Towards Learned Algorithms, Data Structures, and Systems




All systems and applications are composed from basic data structures and algorithms, such as index structures, priority queues, and sorting algorithms. Most of these primitives have been around since the early beginnings of computer science (CS) and form the basis of every CS intro lecture. Yet, we might soon face an inflection point: recent results show that machine learning has the potential to alter the way those primitives or systems at large are implemented in order to provide optimal performance for specific applications. In this talk, I will provide an overview of how machine learning is changing the way we build systems and outline different ways to build learned algorithms and data structures to achieve “instance-optimality” with a particular focus on data management systems.
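One of the best-known examples of this line of work, the learned index, can be sketched in a few lines (a toy single-model version; production designs use staged or spline models): a model predicts a key's position in a sorted array, and a bounded local search corrects any prediction error.

```python
def build_learned_index(keys):
    """Fit a linear model predicting each key's position in the sorted
    array, and record the worst-case prediction error.  A toy learned
    index: the model replaces the tree traversal of a B-tree."""
    n = len(keys)
    mean_x = sum(keys) / n
    mean_y = (n - 1) / 2
    var = sum((x - mean_x) ** 2 for x in keys)
    slope = sum((x - mean_x) * (y - mean_y)
                for y, x in enumerate(keys)) / var
    intercept = mean_y - slope * mean_x
    err = max(abs((slope * k + intercept) - i) for i, k in enumerate(keys))
    return slope, intercept, int(err) + 1

def lookup(keys, model, key):
    """Predict the position, then scan only within the error bound."""
    slope, intercept, err = model
    guess = int(slope * key + intercept)
    lo = max(0, guess - err)
    hi = min(len(keys), guess + err + 1)
    for i in range(lo, hi):
        if keys[i] == key:
            return i
    return -1
```

The "instance-optimality" angle is visible even in this toy: the closer the key distribution is to the model's shape (here, linear), the smaller the recorded error bound and the cheaper every lookup becomes.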

Short Bio:

Tim Kraska is an Associate Professor of Electrical Engineering and Computer Science in MIT's Computer Science and Artificial Intelligence Laboratory and co-director of the Data System and AI Lab at MIT (DSAIL@CSAIL). Currently, his research focuses on building systems for machine learning, and using machine learning for systems. Before joining MIT, Tim was an Assistant Professor at Brown, spent time at Google Brain, and was a postdoc in the AMPLab at UC Berkeley after receiving his PhD from ETH Zurich. Tim is a 2017 Alfred P. Sloan Research Fellow in computer science and has received several awards, including the 2018 VLDB Early Career Research Contribution Award and the 2017 VMware Systems Research Award.


Monday May 20, 2019
Start: 20.05.2019 17:00

HG D22

David Sidler - PhD Defense: In-Network Data Processing using FPGAs

Friday May 24, 2019
Friday May 31, 2019
Start: 31.05.2019 12:15

CAB E 72

Lunch Seminar Talk by Jansen Zhao

Title: A brief introduction to quantum computing with plausible applications to machine learning


I will give a brief introduction to the main concepts in quantum information and quantum computing, and review the basic set of quantum algorithmic primitives. I will then show, using the example of Gaussian processes, how these quantum building blocks can be combined to provide computational speedup in machine learning. We will discuss the practical utility of these quantum algorithms and explore the domain of anticipated near-term applications of quantum computing.

Monday July 08, 2019
Start: 08.07.2019 18:00

HG D 22

Zaheer Chothia - PhD Defense

Title: Explaining, Measuring and Predicting Effects in Layered Systems


• Prof. Dr. Timothy Roscoe

• Prof. Dr. Gustavo Alonso

• Prof. Dr. Rodrigo Fonseca (Brown University)

Thursday July 11, 2019
Start: 11.07.2019 10:00

Thursday, 11. July 2019, 10:00-11:00 in CAB E 72

Speaker: Boris Grot (University of Edinburgh)

Title: Scale-Out ccNUMA: Embracing Skew in Distributed Key-Value Stores







Key-value stores (KVS’s) underpin many of today’s cloud services. For scalability and performance, state-of-the-art KVS systems distribute the dataset across a pool of servers, each of which holds a shard of data in memory and serves queries for the data in the shard. An important performance bottleneck that a KVS design must address is the load imbalance caused by skewed popularity distributions, whereby the “hot” items are accessed much more frequently than the rest of the dataset. Despite recent work on skew mitigation, existing approaches are limited in their efficacy when it comes to high-performance in-memory KVS deployments.

In this talk, I will discuss our recent work on skew mitigation for distributed in-memory KVS’s. We embrace popularity skew as a performance opportunity by aggressively caching popular items at all nodes of the KVS. The main challenge for such a design is keeping the caches consistent while avoiding serialization points that can become a performance bottleneck at high load. I will describe our fully decentralized caching architecture and the cache-coherence-inspired protocol used to keep the distributed caches consistent. I will also present simple protocol extensions that enable fault tolerance, with applicability beyond skew-tolerant KVS's.
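The caching design can be illustrated with a toy sketch (hypothetical class and method names; the real protocol is cache-coherence-inspired and fully decentralized, which this centralized simplification does not capture): hot items are replicated into every node's cache, and writes to hot keys update all replicas.

```python
class Node:
    """One KVS node: owns a shard of the data and keeps a local cache
    of hot items replicated from other shards."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.shard = {}
        self.cache = {}

class SkewAwareKVS:
    """Toy skew-aware KVS: hot keys are cached at every node, so the
    skewed read load spreads across the cluster instead of hammering
    a single owner."""
    def __init__(self, n_nodes, hot_keys):
        self.nodes = [Node(i) for i in range(n_nodes)]
        self.hot_keys = set(hot_keys)

    def _owner(self, key):
        return self.nodes[hash(key) % len(self.nodes)]

    def write(self, key, value):
        # Writes go to the owning shard; for hot keys, the update is
        # propagated to every replica's cache to keep them consistent.
        self._owner(key).shard[key] = value
        if key in self.hot_keys:
            for node in self.nodes:
                node.cache[key] = value

    def read(self, key, node_id):
        # Any node serves a hot key from its local cache; cold keys
        # are fetched from their owner.
        node = self.nodes[node_id]
        if key in node.cache:
            return node.cache[key]
        return self._owner(key).shard.get(key)
```

In the real design, keeping those replicated caches consistent without a central serialization point is exactly the hard part the protocol in the talk addresses.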


Boris Grot is an Associate Professor in the School of Informatics at the University of Edinburgh. His research seeks to address efficiency bottlenecks and capability shortcomings of processing platforms for data-intensive applications. Boris is a member of the MICRO Hall of Fame and a recipient of various awards for his research, including an IEEE Micro Top Pick and the Best Paper Award at HPCA 2019. Boris holds a PhD in Computer Science from The University of Texas at Austin and spent two years as a post-doctoral researcher at EPFL.


Monday September 16, 2019