Friday December 08, 2017
Start: 08.12.2017 12:15

CAB G 61

Kaveh Razavi (Vrije Universiteit Amsterdam)

Title: The Sad State of Software on Unreliable and Leaky Hardware


Hardware that we use today is unreliable and leaky. Bit flips plague a substantial part of the memory hardware that we use today and there are a variety of side channels that leak sensitive information about the system. In this talk, I will briefly talk about how we turned Rowhammer bit flips into practical exploitation vectors compromising browsers, clouds and mobile phones. I will then talk about a new side-channel attack that uses the traces that the memory management unit of the processor leaves in its data/instruction caches to derandomize secret pointers from JavaScript. This attack is very powerful: it breaks address-space layout randomization (ASLR) in the browser on all the 22 modern CPU architectures that we tried in only tens of seconds and it is not easy to fix. It is time to rethink our reliance on ASLR as a basic security mechanism in sandboxed environments such as JavaScript.
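
For readers unfamiliar with Rowhammer, the sketch below (not from the talk; the buffer, offsets, and iteration count are placeholders) shows the classic access pattern at its core: two "aggressor" addresses are read in a tight loop and flushed from the cache each time, so every iteration re-activates their DRAM rows and may eventually flip bits in a "victim" row between them. Mapping virtual addresses to physically adjacent DRAM rows is platform-specific and omitted here.

```c
/* Minimal sketch (not from the talk) of the core Rowhammer access pattern. */
#include <stdint.h>
#include <stdlib.h>
#include <emmintrin.h>   /* _mm_clflush (x86) */

static void hammer(volatile uint8_t *aggr1, volatile uint8_t *aggr2,
                   long iterations)
{
    for (long i = 0; i < iterations; i++) {
        (void)*aggr1;                      /* read forces a DRAM row activation */
        (void)*aggr2;
        _mm_clflush((const void *)aggr1);  /* evict so the next read goes to DRAM again */
        _mm_clflush((const void *)aggr2);
    }
}

int main(void)
{
    /* Placeholder buffer; these offsets do NOT correspond to real aggressor
     * rows, they only make the sketch runnable. */
    uint8_t *buf = malloc(16 * 1024 * 1024);
    if (!buf) return 1;
    hammer(buf, buf + 8 * 1024 * 1024, 1000000);
    free(buf);
    return 0;
}
```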


Kaveh Razavi is starting as an assistant professor in the VUSec group of Vrije Universiteit Amsterdam next year. Besides building systems, he is currently mostly interested in the security implications of unreliable and leaky general-purpose hardware. He regularly publishes at top systems and systems security venues, and his research has won multiple industry and academic awards, including several Pwnie Awards and the CSAW Applied Research Best Paper award. In the past, he has built network and storage stacks for rack-scale computers at Microsoft Research (2014-2015), worked on the scalability issues of cloud virtual machines for his PhD (2012-2015), and hacked on Barrelfish as a master's student (2010-2011)!

Monday December 11, 2017
Start: 11.12.2017 16:15

CAB G 61 

Distinguished Computer Science Colloquium:

Subhasish Mitra, Stanford University, California, USA

Transforming Nanodevices into Nanosystems: The N3XT 1,000X

Host: Prof. Onur Mutlu


Coming generations of information technology will process unprecedented amounts of loosely-structured data, including streaming video and audio, natural languages, real-time sensor readings, contextual environments, or even brain signals. The computation demands of these abundant-data applications (e.g., deep learning) far exceed the capabilities of today’s computing systems, and cannot be met by isolated improvements in transistor technologies, memories, or integrated circuit (IC) architectures alone. Transformative nanosystems, which leverage the unique properties of emerging nanotechnologies to create new IC architectures, are required to deliver unprecedented functionality, performance and energy efficiency. However, emerging nanomaterials and nanodevices face major obstacles such as inherent imperfections and variations. Thus, realizing working circuits, let alone transformative nanosystems, has been infeasible. The N3XT (Nano-Engineered Computing Systems Technology) approach overcomes these challenges through recent innovations across the computing stack: (a) new logic devices using nanomaterials such as one-dimensional carbon nanotubes (and two-dimensional semiconductors) for high performance and energy efficiency; (b) high-density non-volatile resistive and magnetic memories; (c) ultra-dense (e.g., monolithic) three-dimensional integration of thin layers of logic and memory with fine-grained connectivity; (d) new IC architectures for computation immersed in memory; and, (e) new materials technologies and their integration for efficient heat removal. N3XT hardware prototypes represent leading examples of transforming the basic science of nanomaterials and nanodevices into actual nanosystems. Compared to conventional (2D) systems, N3XT architectures promise to improve the energy efficiency of abundant-data applications significantly, in the range of three orders of magnitude. Such massive benefits enable new frontiers of applications for a wide range of computing systems, from embedded systems to the cloud.


Subhasish Mitra is Professor of Electrical Engineering and of Computer Science at Stanford University, where he directs the Stanford Robust Systems Group and co-leads the Computation focus area of the Stanford SystemX Alliance. He is also a faculty member of the Stanford Neurosciences Institute. Before joining the Stanford faculty, he was a Principal Engineer at Intel Corporation. Prof. Mitra's research interests range broadly across robust computing, nanosystems, VLSI design, CAD, validation and test, and neurosciences. He, jointly with his students and collaborators, demonstrated the first carbon nanotube computer and the first 3D Nanosystem with computation immersed in memory. These demonstrations received widespread recognition (the cover of NATURE, a research highlight to the United States Congress by the National Science Foundation, coverage as an "important scientific breakthrough" by the BBC, Economist, EE Times, IEEE Spectrum, MIT Technology Review, National Public Radio, New York Times, Scientific American, Time, Wall Street Journal, Washington Post, and numerous others worldwide). His earlier work on X-Compact test compression has been key to cost-effective manufacturing and high-quality testing of a vast majority of electronic systems. X-Compact and its derivatives have been implemented in widely-used commercial Electronic Design Automation tools. Prof. Mitra's honors include the ACM SIGDA/IEEE CEDA Richard Newton Technical Impact Award in Electronic Design Automation (a test-of-time honor), the Semiconductor Research Corporation's Technical Excellence Award, the Intel Achievement Award (Intel's highest corporate honor), and the Presidential Early Career Award for Scientists and Engineers from the White House (the highest United States honor for early-career outstanding scientists and engineers). He and his students have published several award-winning papers at major venues: the IEEE/ACM Design Automation Conference, the IEEE International Solid-State Circuits Conference, the IEEE International Test Conference, IEEE Transactions on CAD, the IEEE VLSI Test Symposium, and the Symposium on VLSI Technology. At Stanford, he has been honored several times by graduating seniors "for being important to them during their time at Stanford." Prof. Mitra served on the Defense Advanced Research Projects Agency's (DARPA) Information Science and Technology Board as an invited member. He is a Fellow of the ACM and the IEEE.

Friday December 15, 2017
Start: 15.12.2017 12:00

CAB E 72

Kevin Chang (Carnegie Mellon University)

Title: Understanding and Improving the Latency of DRAM-Based Memory Systems


Over the past two decades, the storage capacity and access bandwidth of main memory have improved tremendously, by 128x and 20x, respectively. These improvements are mainly due to the continuous technology scaling of DRAM (dynamic random-access memory), which has been used as the physical substrate for main memory. In stark contrast with capacity and bandwidth, DRAM latency has remained almost constant, reducing by only 1.3x in the same time frame. Therefore, long DRAM latency continues to be a critical performance bottleneck in modern systems. Increasing core counts and the emergence of increasingly data-intensive and latency-critical applications further stress the importance of providing low-latency memory accesses. In this talk, we will identify three main problems that contribute significantly to the long latency of DRAM accesses. To address these problems, we show that (1) augmenting the DRAM chip architecture with simple and low-cost features and (2) developing a better understanding of manufactured DRAM chips together lead to significant memory latency reductions. Our new proposals significantly improve both system performance and energy efficiency.
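
As a rough illustration of why latency, rather than bandwidth, is the hard problem, the sketch below (illustrative only; the buffer size, stride, and iteration count are made up) uses the standard pointer-chasing trick: every load depends on the previous one, so the measured time per iteration approximates the memory access latency rather than throughput.

```c
/* Rough pointer-chasing latency sketch; takes a few seconds to run. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (64 * 1024 * 1024 / sizeof(void *))  /* 64 MiB of pointers */

int main(void)
{
    void **buf = malloc(N * sizeof(void *));
    if (!buf) return 1;

    /* Build one long cycle through the buffer with a large stride so that
     * most accesses miss in the caches and go to DRAM. */
    size_t stride = 4099;   /* prime, hence co-prime with N: visits every slot */
    size_t idx = 0;
    for (size_t i = 0; i < N; i++) {
        size_t next = (idx + stride) % N;
        buf[idx] = &buf[next];
        idx = next;
    }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    void **p = &buf[0];
    for (long i = 0; i < 100000000L; i++)
        p = (void **)*p;                 /* serialized, dependent loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    printf("avg access latency: %.1f ns (%p)\n", ns / 1e8, (void *)p);
    free(buf);
    return 0;
}
```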


Kevin Chang is a recent Ph.D. graduate in electrical and computer engineering from Carnegie Mellon University, where he was advised by Prof. Onur Mutlu. He is broadly interested in computer architecture, large-scale systems, and emerging technologies. Specifically, his graduate research focused on improving the performance and energy efficiency of memory systems. He will join Facebook as a research scientist. He was a recipient of SRC and Intel fellowships.

Thursday December 21, 2017
Start: 21.12.2017 17:00

Title: Building Distributed Storage with Specialized Hardware


  • Gustavo Alonso
  • Timothy Roscoe
  • Torsten Hoefler
  • Ken Eguro (MSR Redmond, USA)  

Room: HG D 22

Thursday February 22, 2018
Start: 22.02.2018 11:00


COMPASS: Computing Platforms Seminar Series

CAB E 72

Speaker: Ioannis Koltsidas (IBM Research Zurich)

System software for commodity solid-state storage






The high-performance storage landscape is being shaped by three main developments: a) Flash memories are scaling to extreme densities (e.g., 3D-TLC, QLC), b) new storage devices offer single-digit-microsecond latencies (e.g., SSDs based on 3D XPoint memory), and c) new standards provide high-performance, efficient access to local (e.g., NVMe) and remote storage (e.g., NVMe-oF).

In this talk we present our work on building systems to maximize the benefits of new technologies, targeting commodity hardware environments such as cloud datacenters. Specifically, we focus on: a) Improving performance and endurance of low-cost Flash via a host translation layer, and b) exploiting low-latency NVM devices to reduce the cost and increase the scalability of systems that would otherwise rely on large amounts of DRAM.

Key ingredients in our stack include a storage virtualization layer, an efficient key-value storage engine built specifically for the new types of media, and a novel task-based I/O runtime system that enables CPU-efficient, high-performance access to storage in a programmer-friendly way. We present an overview of these technologies along with lessons learned while building them, as well as experimental evidence that demonstrates their applicability.
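
As a minimal sketch of the submit/complete split that such a task-based I/O runtime builds on (our assumption for illustration, not the speaker's code; the file name and sizes are placeholders), the example below issues one asynchronous read with Linux libaio and reaps the completion separately, leaving the submitting thread free to run other tasks in between. Compile with -laio.

```c
#define _GNU_SOURCE          /* O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <libaio.h>

int main(void)
{
    int fd = open("/tmp/testfile", O_RDONLY | O_DIRECT);   /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 4096, 4096)) return 1;         /* O_DIRECT needs alignment */

    io_context_t ctx = 0;
    if (io_setup(8, &ctx) < 0) { perror("io_setup"); return 1; }

    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 4096, 0);                    /* read 4 KiB at offset 0 */
    if (io_submit(ctx, 1, cbs) != 1) { perror("io_submit"); return 1; }

    /* The calling thread is free to do other work here; a runtime would
     * reap completions from a poller thread or between tasks. */
    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);                      /* wait for the completion */
    printf("read returned %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    return 0;
}
```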


Ioannis (Yannis) Koltsidas is a Research Staff Member in the Cloud Computing Infrastructure department at the IBM Research Lab in Zurich, Switzerland. In his current role he leads a team of researchers working on next-generation Flash-enabled storage systems, the exploitation of Flash memory in host servers, and applications of storage-class memories such as Phase-Change Memory. His interests also include distributed scale-out file storage (GPFS, HDFS) and extensions thereof based on open-format magnetic tape. Some of the latest projects he has been involved in include the IBM FlashSystem, the IBM Easy Tier Server for the DS8000 series, and the IBM LTFS Enterprise Edition.

Previously, Ioannis received his PhD in Computer Science from the University of Edinburgh, where he was a member of the Database Group at the School of Informatics. His research was supervised by Prof. Stratis Viglas. The focus of his thesis, titled "Flashing Up The Storage Hierarchy", was on database systems and data-intensive systems in general that employ novel storage media, such as NAND Flash SSDs, and use novel algorithms and data structures to boost I/O performance. Prior to that, Ioannis completed his undergraduate studies at the Electrical and Computer Engineering Department of the National Technical University of Athens (NTUA) in Athens, Greece, where he majored in Computer Science.


Friday February 23, 2018
Start: 23.02.2018 12:15

Lunch Seminar - Spring 2018

Thursday March 01, 2018
Start: 01.03.2018 16:00


COMPASS: Computing Platforms Seminar Series

CAB E 72

Speaker: Saugata Ghose, Carnegie Mellon University

Title: How Safe Is Your Storage? A Look at the Reliability and Vulnerability of Modern Solid-State Drives






We live in an increasingly data-driven world, where we process and store a much greater amount of data, and we need to reliably keep this data around for a very long time. Today, solid-state drives (SSDs) made of NAND flash memory have become a popular choice for storage, as SSDs offer high storage density and high performance at a low cost. To keep up with consumer demand, manufacturers have been using a number of techniques to increase the density of SSDs. Unfortunately, this density scaling introduces new types of errors that can seriously affect the reliability of the data, and in turn significantly reduce the lifetime of the SSD.

In this talk, I will cover several issues that we have found which affect data reliability and vulnerability on modern SSDs available on the market today. I will explore two such issues in depth, along with solutions we have developed to mitigate or eliminate these issues. First, I will discuss read disturb errors, where reading one piece of data from an SSD can introduce errors into unread pieces of data. Second, I will discuss program interference errors, where writing one piece of data to an SSD can introduce errors both into other pieces of data and to data that has yet to be written. Notably, our findings show that the predominant solution adopted by industry to mitigate program interference actually introduces other interference errors, and exposes security exploits that can be used by malicious applications. For both issues, I will discuss solutions that we have developed based on these error types, which can buy back much of the lost lifetime, and which can eliminate the security exploits.
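
To make the read-disturb mechanism concrete, here is a deliberately simplified toy model (our illustration, not the speaker's solution; the threshold is invented): the flash translation layer counts reads per block and migrates a block's data once the count gets high enough that accumulated disturb errors could exceed what ECC can correct, a mitigation commonly known as read reclaim.

```c
/* Toy model of read disturb and read reclaim; all numbers are made up. */
#include <stdio.h>

#define NUM_BLOCKS        4
#define RECLAIM_THRESHOLD 100000   /* hypothetical reads allowed before migration */

struct flash_block {
    unsigned long reads_since_write;
};

static struct flash_block blocks[NUM_BLOCKS];

static void read_page(int block)
{
    blocks[block].reads_since_write++;   /* every read slightly disturbs neighbouring pages */
    if (blocks[block].reads_since_write >= RECLAIM_THRESHOLD) {
        /* A real SSD's FTL would copy the still-valid pages to a fresh block
         * and erase this one; here we only reset the counter. */
        printf("block %d: read-reclaim triggered\n", block);
        blocks[block].reads_since_write = 0;
    }
}

int main(void)
{
    for (long i = 0; i < 250000; i++)
        read_page(0);                    /* a read-hot block */
    return 0;
}
```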


Saugata Ghose is a Systems Scientist in the Department of Electrical and Computer Engineering at Carnegie Mellon University. He received dual B.S. degrees in computer science and in computer engineering from Binghamton University, State University of New York, and the M.S. and Ph.D. degrees from Cornell University, where he was the recipient of the NDSEG Fellowship and the ECE Director's Ph.D. Teaching Assistant Award. He received the Best Paper Award from the DFRWS-EU conference in 2017 for his work on recovering data from solid-state drives. His current research interests include application- and system-aware memory and storage systems, virtual memory management, architectural solutions for large-scale systems, GPUs, and emerging memory technologies. For more information, see his website.

Friday March 02, 2018
Start: 02.03.2018 10:00

Speaker: Brad Beckmann (AMD Research)

Title: Processor Design for Exascale Computing

Date and Venue: Friday 2nd of March, 2018, at 10:00am, CAB E 72


The US Department of Energy’s exascale computing initiative aims to build supercomputers to solve a wide range of HPC problems, including emerging data science and machine learning problems. The talk will first cover the requirements for exascale computing and highlight various challenges that need to be addressed. The talk will then give an overview of the various technologies that AMD is pursuing to design an Exascale Heterogeneous Processor (EHP), which will serve as the basic building block of an exascale supercomputer. Finally, the talk will conclude by highlighting some of the simulation infrastructure used to evaluate EHP and our effort to open source and share it with the broader research community.

Short Bio:

Brad Beckmann has been a member of AMD Research since 2007 and works in Bellevue, WA. Brad completed his PhD degree in the Department of Computer Science at the University of Wisconsin-Madison in 2006, where his doctoral research focused on physical and logical solutions to wire delay in CMP caches. While at AMD Research, he has worked on numerous projects related to memory consistency models, cache coherence, graphics, and on-chip networks. Currently, his primary research focuses on GPU compute solutions and broadening the impact of future AMD Accelerated Processing Unit (APU) servers.

Wednesday March 14, 2018
Start: 14.03.2018 14:00


COMPASS: Computing Platforms Seminar Series

CAB E 72

Speaker: Eric Sedlar, Oracle Labs


Title: Why Systems Research Needs Social Science Added to the Computer Science






Computer scientists are very good at improving metrics that can be quantified: performance per core, per server, and per watt, scalability, and reliability; they are even getting better at somewhat fuzzier metrics like the accuracy of ML systems. However, it is the fuzziest metrics that are driving trends in computing: programmer productivity, usability, cognitive load, and the degree of security provided by a particular system. The biggest trend in computing over the past few decades is the explosion in the use of open-source software for the bulk of computing tasks. This is true even in environments as security-conscious as defense applications, as the stack that needs to execute in an application has become too complicated for one programmer or one software vendor to comprehend or master. This move to open-source software makes most system metrics worse, as much of the code being run is not optimized for CPU efficiency and may be understood by nobody working for the firm operating the software or for its vendors. What is a systems researcher to do when their metrics become inconsequential?


As VP & Technical Director of Oracle Labs, Eric manages a team of close to 200 systems researchers and engineers worldwide. In his tenure in the Labs, Eric has started a number of long-term system research projects that have led to technology transfer into products, including the GraalVM programming language runtime, PGX Parallel Graph Analytics, and the Parfait tool for Program Analysis. His personal research interests have been in the field of data processing and the intersection with compiler technologies. Eric was the co-author of the SIGMOD Best Paper in 2009 and has been an inventor on 85 granted patents.

Wednesday May 09, 2018
Start: 09.05.2018 14:00


COMPASS: Computing Platforms Seminar Series

CAB E 72

Speaker: Bastian Hossbach (Oracle Labs)


Title: Modern programming languages and code generation in the Oracle Database






In this talk, we will present the Oracle Database Multilingual Engine (MLE). MLE is an experimental feature for the Oracle Database that enables developers to write stored procedures and user-defined functions in modern programming languages such as JavaScript and Python. Special attention was paid to embracing the rich ecosystems of tools and libraries developed for those languages, in order to make the developer's experience as familiar as possible. We will show several demos of MLE in action and discuss the challenges of integrating a language runtime with a database system. Under the hood, MLE is powered by the speculative JIT compiler Graal. Having a modern JIT compiler inside a database system not only allows for efficiently running user-defined code, but also for runtime compilation and specialization of SQL expressions and other parts of a query plan to speed up overall query execution.

Short Bio:

Bastian has been a researcher at Oracle Labs in Zurich, Switzerland, since 2015. He is currently working on a high-performance query execution engine for database management systems that is capable of executing query plans combined with user-defined scripts written in a variety of languages (e.g., JavaScript, Python). Bastian received a PhD degree in computer science from the University of Marburg, Germany, in 2015. Prior to Oracle Labs, he was involved in several projects in the areas of data analytics, data processing, and IT security.