By Patrick Thibodeau
November 16, 2011 09:21 AM ET
Computerworld - SEATTLE -- At the supercomputing conference here, there's an almost obsessive focus on developing an exascale computing system -- one that would be roughly 1,000 times more powerful than any existing system -- before the end of the decade.
For most people, something expected to happen eight or nine years in the future might seem a long way off, but here at SC11, it feels as if the end of the decade and the arrival of exascale computing are just around the corner. Part of the push is coming from the U.S. Department of Energy, which will fund the development of these massive systems. The DOE told the industry this summer that it wants an exascale system delivered in the 2019-2020 time frame that won't use more than 20 megawatts of power, and the government has been soliciting proposals on how to achieve that goal.
To put 20MW in perspective, consider the supercomputer that IBM is building for the DOE's Lawrence Livermore National Laboratory. Expected to operate at speeds of up to 20 petaflops, it will be one of the largest supercomputers in the world -- and one of the most energy efficient. Even so, when it's fully powered on next year, it will draw somewhere in the range of 7 to 8 megawatts, according to IBM. An exascale system would have 1,000 petaflops of computing power. (A petaflop is a quadrillion floating-point operations per second.)
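The gap those figures imply can be made concrete with a little arithmetic. The sketch below uses the article's numbers, taking 7.5 MW as an assumed midpoint of IBM's 7-to-8-megawatt range for the Livermore machine; the efficiency comparison itself is illustrative, not from the article.

```python
# Rough energy-efficiency comparison based on the figures in the article.
# Assumption: 7.5 MW as the midpoint of IBM's stated 7-8 MW range.

PFLOPS = 1e15  # one petaflop = 10^15 floating-point operations per second

# IBM's Livermore system: ~20 petaflops at roughly 7.5 MW
livermore_flops = 20 * PFLOPS
livermore_watts = 7.5e6

# DOE exascale target: 1,000 petaflops within a 20 MW power budget
exascale_flops = 1000 * PFLOPS
exascale_watts = 20e6

def gflops_per_watt(flops, watts):
    """Efficiency in billions of floating-point operations per second per watt."""
    return flops / watts / 1e9

liv_eff = gflops_per_watt(livermore_flops, livermore_watts)  # ~2.7 GFLOPS/W
exa_eff = gflops_per_watt(exascale_flops, exascale_watts)    # 50 GFLOPS/W

print(f"Livermore system: {liv_eff:.1f} GFLOPS/W")
print(f"Exascale target:  {exa_eff:.1f} GFLOPS/W")
print(f"Efficiency gain required: roughly {exa_eff / liv_eff:.0f}x")
```

In other words, hitting the DOE's target means delivering 50 times the raw performance on less than three times the power, an efficiency improvement of nearly 20x over one of today's best machines.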
"We're in a power-constrained world now," said Steve Scott, CTO of Nvidia's Tesla business. "The performance we can get on a chip is constrained not by the number of transistors we can put on a chip, but rather by the power."
Scott said x86 CPU technology is limited by its overhead processes. Graphics processing units (GPUs), in contrast, deliver high throughput with very little overhead and use less energy per operation.
***
More: http://www.computerworld.com/s/article/9221883/Exascale_computing_seen_in_this_decade?taxonomyId=12