[Colloquium] Exascale Computing and the Role of Co-design
September 9, 2011
Watch Colloquium: M4V file (677 MB)
- Date: Friday, September 9, 2011
- Time: 12:00 pm — 12:50 pm
- Place: Centennial Engineering Center 1041
Sudip Dosanjh
Sandia National Labs
Achieving a thousand-fold increase in supercomputing capability to reach exascale computing (10^18 operations per second) in this decade will revolutionize the way supercomputers are used. Predictive computer simulations will play a critical role in achieving energy security, developing climate change mitigation strategies, lowering CO2 emissions and ensuring a safe and reliable 21st century nuclear stockpile. Scientific discovery, national competitiveness, homeland security and quality of life issues will also greatly benefit from the next leap in supercomputing technology. This dramatic increase in computing power will be driven by a rapid escalation in the parallelism incorporated in microprocessors. The transition from massively parallel architectures to hierarchical systems (hundreds of processor cores per CPU chip) will be as profound and challenging as the change from vector architectures to massively parallel computers that occurred in the early 1990s. Through a collaborative effort between laboratories and key university and industrial partners, the architectural bottlenecks that limit supercomputer scalability and performance can be overcome. In addition, such an effort will help make petascale computing pervasive by lowering the costs for these systems and dramatically improving their power efficiency.
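To put those numbers in perspective, here is a back-of-the-envelope sketch of why the thousand-fold jump has to come from parallelism rather than clock speed. The per-core figures (about 1 GHz and 8 operations per core per cycle) are illustrative assumptions, not figures from the talk:

```python
# Toy arithmetic behind the "thousand-fold increase" to exascale.
# Assumed figures (illustrative only): ~1 GHz effective clock and
# ~8 floating-point operations per core per cycle.

PETASCALE = 1e15   # operations per second (the 2011 baseline)
EXASCALE  = 1e18   # operations per second (the decade's target)

ops_per_core = 1e9 * 8                 # 1 GHz * 8 ops/cycle (assumption)
cores_needed = EXASCALE / ops_per_core

print(f"Speedup over petascale: {EXASCALE / PETASCALE:,.0f}x")
print(f"Concurrent cores needed at ~8 Gops/core: {cores_needed:,.0f}")
# ~125 million cores: with clock rates flat, the gain must come from
# massive on-chip and system-level parallelism.
```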
The U.S. Department of Energy’s strategy for reaching exascale includes:
- Collaborations with the computer industry to identify gaps
- Prioritizing research based on return on investment and risk assessment
- Leveraging existing industry and government investments and extending technology in strategic technology focus areas
- Building sustainable infrastructure with broad market support
- Extending beyond natural evolution of commodity hardware to create new markets
- Creating system building blocks that offer superior price/performance/programmability at all scales (exascale, departmental, and embedded)
- Co-designing hardware, system software and applications
The last element, co-design, is a particularly important area of emphasis. Applications and system software will need to change as architectures evolve during the next decade. At the same time, there is an unprecedented opportunity for the applications and algorithms community to influence future computer architectures. A new co-design methodology is needed to make sure that exascale applications will work effectively on exascale supercomputers.
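As a loose illustration of the co-design idea, the sketch below evaluates a fixed application kernel against a few hypothetical node designs under a power budget using a roofline-style performance estimate. The model, the candidate designs, and all numbers are invented for illustration; this is not the DOE or Sandia methodology.

```python
# Minimal co-design sketch: pick the node design that minimizes the
# kernel's time-to-solution within a power budget. All values are
# hypothetical and chosen only to show the feedback between
# application characteristics and architecture choices.

def kernel_time(flops, bytes_moved, peak_flops, mem_bw):
    """Roofline-style estimate: the kernel is limited by whichever of
    compute throughput or memory bandwidth is the tighter bottleneck."""
    return max(flops / peak_flops, bytes_moved / mem_bw)

# Candidate node designs: (name, peak flop/s, memory bytes/s, watts)
designs = [
    ("fat-core node",   5e11, 1e11, 300),
    ("many-core node",  2e12, 3e11, 350),
    ("bandwidth-heavy", 1e12, 6e11, 400),
]

# Hypothetical application kernel: 1e12 flops, 5e11 bytes of memory traffic.
FLOPS, BYTES = 1e12, 5e11
POWER_BUDGET = 380  # watts per node

feasible = [(name, kernel_time(FLOPS, BYTES, pf, bw))
            for name, pf, bw, watts in designs if watts <= POWER_BUDGET]
best = min(feasible, key=lambda d: d[1])
print(f"Best design under {POWER_BUDGET} W: {best[0]} ({best[1]:.2f} s)")
# Changing the kernel's flop/byte ratio changes which design wins --
# that two-way dependence between applications and architectures is
# what motivates co-design.
```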
Bio: Sudip Dosanjh heads the extreme-scale computing group at Sandia, which spans architectures, system software, scalable algorithms and disruptive computing technologies. He is also Sandia's Exascale and Platforms lead, co-director of the Alliance for Computing at the Extreme Scale (ACES) and the Science Partnership for Extreme-scale Computing (SPEC), and program manager for Sandia's Computer Systems and Software Environments (CSSE) effort under DOE's Advanced Simulation and Computing Program. In partnership with Cray, ACES has developed and is deploying the Cielo petascale capability platform. He and Jeff Nichols founded the ORNL/Sandia Institute for Advanced Architectures and Algorithms. His research interests include computational science, exascale computing, system simulation and co-design. He holds a Ph.D. in Mechanical Engineering from U.C. Berkeley.