Cloud computing has emerged as a dominant platform for delivering online services, resulting in constantly growing demand for datacenter performance. Following Moore’s Law, popular belief holds that the number of cores per chip will grow at an exponential rate, leading to a commensurate increase in server performance. However, while demand for cloud infrastructure continues to grow, the semiconductor manufacturing industry has reached physical limits: rising chip power now prevents further performance improvements. Continuing to increase the performance of the cloud within these physical constraints therefore mandates improving server efficiency. In this talk, I will summarize our recent work on quantifying the inefficiencies of cloud software on modern processors and describe Proactive Instruction Fetch, a technique that addresses one of the dominant inefficiencies we identified. I will conclude by describing a projection of server architecture in future technologies, where an increasing fraction of the chip is dark silicon that we cannot afford to power.
Mike Ferdman is a Ph.D. candidate at Carnegie Mellon University. His research interests are in the area of computer architecture, with emphasis on the design of server systems. Mike’s primary research objective is to understand the interactions of software and processor microarchitecture to enable the design of high-performance, power-efficient, and compact servers.