Events

Monday October 16, 2017
Start: 16.10.2017 11:00

CAB E 72

Talk by Yishai Oltchik (Hebrew University of Jerusalem)

Title: Network Topologies and Inevitable Contention

Abstract:

Network topologies can have a significant effect on the execution costs of parallel algorithms due to inter-processor communication. For particular combinations of computations and network topologies, costly network contention may inevitably become a bottleneck, even if algorithms are optimally designed so that each processor communicates as little as possible. We obtain novel contention lower bounds that are functions of the network and the computation graph parameters. For several combinations of fundamental computations and common network topologies, our new analysis improves upon previous per-processor lower bounds, which only specify the number of words communicated by the busiest individual processor. We consider torus and mesh topologies, universal fat-trees, and hypercubes; algorithms covered include classical matrix multiplication and direct numerical linear algebra, fast matrix multiplication algorithms, programs that reference arrays, N-body computations, and the FFT. For example, we show that fast matrix multiplication algorithms (e.g., Strassen’s) running on a 3D torus will suffer from contention bottlenecks. On the other hand, this network is likely sufficient for a classical matrix multiplication algorithm. Our new lower bounds are matched by existing algorithms only in very few cases, leaving many open problems for network and algorithmic design.
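
As a rough illustration of the flavor of analysis involved (a generic bisection-bandwidth argument, not the talk's actual bounds), the sketch below estimates how many communication steps contention forces on a d-dimensional torus; the node count, traffic volume, and constant factors are hypothetical placeholders.

    # Hypothetical back-of-envelope, not the lower bounds from the talk:
    # a classic bisection-bandwidth contention argument for a d-D torus.
    def torus_bisection_width(p, d):
        # A d-dimensional torus with p = k^d nodes has bisection width on
        # the order of 2 * k^(d-1) links (the wraparound doubles the cut).
        k = round(p ** (1.0 / d))
        return 2 * k ** (d - 1)

    def contention_steps(words_across_bisection, p, d):
        # If W words must cross the bisection and each cut link carries one
        # word per step, no schedule finishes in fewer than W / width steps,
        # however evenly the per-processor traffic is balanced.
        return words_across_bisection / torus_bisection_width(p, d)

    # Example: 10^9 words crossing the bisection of a 4096-node 3D torus.
    print(contention_steps(1e9, 4096, 3))   # ~1.95e6 steps

A per-processor bound, by contrast, only constrains the busiest node's traffic; the abstract's point is that on low-dimensional tori such cut-based terms can dominate for communication-heavy computations like fast matrix multiplication.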

Biography:

Yishai Oltchik is currently an MSc student at the Hebrew University of Jerusalem under the supervision of Oded Schwartz, researching parallel computation and HPC. His primary research interests are in the fields of network topologies and parallel algorithms.

Thursday November 09, 2017
Start: 09.11.2017 10:00

CAB E 72

Brian Gold (Pure Storage)

Title: Accelerating scale-out file systems with hardware/software co-design

Abstract:

Modern file systems can be viewed as specialized database applications, enabling features such as snapshots, compression, replication, and more. As data volumes and performance demands continue to grow, file-system designers have turned to scale-out architectures and, therefore, suffer the joys and pains of distributed database systems. In this talk we will describe several of the key insights behind Pure Storage's FlashBlade, a scale-out file and object storage system that achieves scalability and performance through deep hardware/software co-design.
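
As a toy illustration of the "file system as a specialized database" view mentioned above (this is not FlashBlade's design; the class and its fields are invented for the example), the sketch below implements snapshots as pointers into immutable, versioned metadata, much like multi-version records in a database.

    # Toy versioned-metadata file system: every write produces a new
    # immutable metadata version, so a snapshot is just an index into the
    # version history.  Purely illustrative; not FlashBlade's design.
    import copy

    class ToyFS:
        def __init__(self):
            self.versions = [{}]      # immutable metadata maps: path -> blocks
            self.snapshots = {}       # snapshot name -> version index

        def write(self, path, blocks):
            head = copy.deepcopy(self.versions[-1])   # copy-on-write metadata
            head[path] = blocks
            self.versions.append(head)

        def snapshot(self, name):
            self.snapshots[name] = len(self.versions) - 1

        def read(self, path, snapshot=None):
            v = self.snapshots[snapshot] if snapshot else len(self.versions) - 1
            return self.versions[v].get(path)

    fs = ToyFS()
    fs.write("/a", [1, 2])
    fs.snapshot("s1")
    fs.write("/a", [3])
    assert fs.read("/a") == [3] and fs.read("/a", "s1") == [1, 2]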

Bio:

Brian Gold is an engineering director at Pure Storage and part of the founding team for FlashBlade, Pure’s scale-out, all-flash file and object storage platform. He’s contributed to nearly every part of the FlashBlade architecture and development from inception to production, but fortunately most of his code has been rewritten by others. Brian received a PhD from Carnegie Mellon University, focusing on computer architecture and resilient computing.

Start: 09.11.2017 19:00

LLVM Compiler and Code Generation Social

 

Details and registration at: www.meetup.com/llvm-compiler-and-code-generation-socials-zurich/events/242409252/

Friday November 24, 2017
Start: 24.11.2017 12:15

CAB E 72

Sandhya Dwarkadas (University of Rochester/invited professor at EPFL) 

Title: Performance Isolation on Modern Multi-Socket Systems

Abstract:  

Recognizing that applications are rarely executed in isolation today, I will discuss some practical challenges in making the best use of available hardware and our approach to addressing these challenges. I will describe two independent and complementary control mechanisms using low-overhead hardware performance counters that we have developed: a sharing- and resource-aware mapper (SAM) to effect task placement with the goal of localizing shared data communication and minimizing resource contention based on the offered load; and an application parallelism manager (MAP) that controls the offered load with the goal of improving system parallel efficiency. Our results emphasize the need for low-overhead monitoring of application behavior under changing environmental conditions in order to adapt to changes in both the environment and application behavior. If time permits, I will also outline additional work on memory management design that eliminates address translation redundancy via appropriate sharing.
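
As a minimal sketch of the placement idea described above (not the actual SAM/MAP implementation; the metric names and thresholds stand in for quantities a real system would derive from hardware performance counters), the code below co-locates tasks that share data and spreads bandwidth-hungry tasks across sockets.

    # Hypothetical sharing- and resource-aware placement heuristic.
    def place(tasks, num_sockets):
        placement = {}
        sharers = [t for t in tasks if t['sharing'] > 0.5]
        others  = [t for t in tasks if t['sharing'] <= 0.5]
        # Co-locate heavy sharers so shared-data communication stays within
        # one socket instead of crossing the interconnect.
        for t in sharers:
            placement[t['id']] = 0
        # Spread the rest, most bandwidth-hungry first, round-robin across
        # sockets to avoid saturating any one socket's memory controllers.
        others.sort(key=lambda t: t['bw_demand'], reverse=True)
        for i, t in enumerate(others):
            placement[t['id']] = i % num_sockets
        return placement

    tasks = [{'id': 0, 'sharing': 0.9, 'bw_demand': 0.1},
             {'id': 1, 'sharing': 0.8, 'bw_demand': 0.2},
             {'id': 2, 'sharing': 0.1, 'bw_demand': 0.9},
             {'id': 3, 'sharing': 0.2, 'bw_demand': 0.7}]
    print(place(tasks, 2))   # {0: 0, 1: 0, 2: 0, 3: 1}

A load controller in the spirit of MAP would sit alongside such a mapper, shrinking or growing the number of runnable tasks when the counters indicate poor parallel efficiency.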

Bio:

Sandhya Dwarkadas is the Albert Arendt Hopeman Professor and Chair of Computer Science at the University of Rochester, with a secondary appointment in Electrical and Computer Engineering. She is currently on sabbatical as an invited professor at EPFL. She was named an IEEE fellow in 2017 for her contributions to shared memory and reconfigurability. Her research is targeted at both the hardware and software layers of computing systems and especially at the boundary, with a particular focus on the challenges of making coordination and communication efficient in parallel and distributed systems. She is co-inventor on 12 granted U.S. patents. She was program chair for ASPLOS (International Conference on Architectural Support for Programming Languages and Operating Systems) 2015. She is currently a board member on Computing Research Association's Committee on the Status of Women in Computing Research (CRA-W).

URL: http://www.cs.rochester.edu/u/sandhya

Monday December 04, 2017
Start: 04.12.2017 16:00

HG D22

Gerd Zellweger - PhD Defense

Title: On the Construction of Dynamic and Adaptive Operating Systems

Committee:

  • Timothy Roscoe
  • Gustavo Alonso
  • Jonathan Appavoo (Boston University)

 

Friday December 08, 2017
Start: 08.12.2017 12:15

Room: to be announced

Kaveh Razavi (Vrije Universiteit Amsterdam)

Title: The Sad State of Software on Unreliable and Leaky Hardware

Abstract:

Hardware that we use today is unreliable and leaky. Bit flips plague a substantial part of the memory hardware in use today, and a variety of side channels leak sensitive information about the system. In this talk, I will briefly describe how we turned Rowhammer bit flips into practical exploitation vectors compromising browsers, clouds, and mobile phones. I will then talk about a new side-channel attack that uses the traces that the memory management unit of the processor leaves in its data/instruction caches to derandomize secret pointers from JavaScript. This attack is very powerful: it breaks address-space layout randomization (ASLR) in the browser on all 22 modern CPU architectures that we tried in only tens of seconds, and it is not easy to fix. It is time to rethink our reliance on ASLR as a basic security mechanism in sandboxed environments such as JavaScript.
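
To make the MMU side channel mentioned above concrete, the simplified model below (assuming x86-64 4-level paging with 4 KB pages, 8-byte page-table entries, and 64-byte cache lines; it illustrates the leak only and is not attack code) shows how the cache line touched at each level of a page-table walk depends on bits of the virtual address, which is why observing those lines helps derandomize a pointer.

    # Which cache line within each page-table page does a walk touch?
    # Assumes x86-64 4-level paging: four 9-bit table indices above a
    # 12-bit page offset, 8-byte entries, 64-byte cache lines.
    def pagetable_cache_lines(virtual_address):
        lines = []
        for level in range(4):                          # PML4, PDPT, PD, PT
            shift = 39 - 9 * level
            index = (virtual_address >> shift) & 0x1FF  # 9-bit table index
            lines.append(index * 8 // 64)               # line within that table page
        return lines   # each line number exposes 6 of that level's 9 index bits

    print(pagetable_cache_lines(0x7F1234567000))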

Bio:

Kaveh Razavi is starting as an assistant professor in the VUSec group of Vrije Universiteit Amsterdam next year. Besides building systems, he is currently mostly interested in the security implications of unreliable and leaky general-purpose hardware. He regularly publishes at top systems and systems security venues, and his research has won multiple industry and academic awards, including several Pwnie awards and the CSAW best applied research paper award. In the past, he built network and storage stacks for rack-scale computers at Microsoft Research (2014-2015), worked on the scalability issues of cloud virtual machines for his PhD (2012-2015), and hacked on Barrelfish as a master's student (2010-2011)!

Friday December 15, 2017
Start: 15.12.2017 12:00

CAB E 72

Kevin Chang (Carnegie Mellon University)

Title: Understanding and Improving the Latency of DRAM-Based Memory Systems

Abstract:

Over the past two decades, the storage capacity and access bandwidth of main memory have improved tremendously, by 128x and 20x, respectively. These improvements are mainly due to the continuous technology scaling of DRAM (dynamic random-access memory), which has been used as the physical substrate for main memory. In stark contrast with capacity and bandwidth, DRAM latency has remained almost constant, reducing by only 1.3x in the same time frame. Therefore, long DRAM latency continues to be a critical performance bottleneck in modern systems. Increasing core counts and the emergence of ever more data-intensive and latency-critical applications further stress the importance of providing low-latency memory accesses. In this talk, we will identify three main problems that contribute significantly to the long latency of DRAM accesses. To address these problems, we show that (1) augmenting the DRAM chip architecture with simple and low-cost features, and (2) developing a better understanding of manufactured DRAM chips, together lead to significant memory latency reduction. Our new proposals significantly improve both system performance and energy efficiency.
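
A quick back-of-envelope using only the 20x bandwidth and 1.3x latency ratios quoted above (the absolute latency and bandwidth figures below are assumed for illustration) shows why a small access such as a 64-byte cache line fetch stays latency-bound:

    # Simple model: access time = latency + transfer time.
    # Only the 20x / 1.3x ratios come from the abstract; the baseline
    # 100 ns latency and 10 GB/s bandwidth are assumptions.
    def access_ns(latency_ns, bandwidth_gb_per_s, num_bytes):
        return latency_ns + num_bytes / bandwidth_gb_per_s   # 1 GB/s ~ 1 byte/ns

    old = access_ns(100.0, 10.0, 64)           # assumed baseline
    new = access_ns(100.0 / 1.3, 200.0, 64)    # 1.3x lower latency, 20x bandwidth
    print(old, new, old / new)                 # speedup ~1.4x, nowhere near 20x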

Bio:

Kevin Chang is a recent Ph.D. graduate in electrical and computer engineering from Carnegie Mellon University, where he was advised by Prof. Onur Mutlu. He is broadly interested in computer architecture, large-scale systems, and emerging technologies. Specifically, his graduate research focused on improving the performance and energy efficiency of memory systems. He will join Facebook as a research scientist. He was a recipient of the SRC and Intel fellowship.

Thursday December 21, 2017
Start: 21.12.2017 17:00

Title: TBA

Committee:

  • Gustavo Alonso
  • Timothy Roscoe
  • Torsten Hoefler
  • Ken Eguro (MSR Redmond, USA)  

Room: HG D 22