Blog

In the era of “Big Data”-based science, access to and sharing of data play a key role in scientific collaboration and research. The San Diego Supercomputer Center (SDSC) at the University of California, San Diego, has implemented a new feature of the Globus software that allows researchers using the Center’s computational and storage resources to easily and securely access and share large data sets with colleagues. SDSC is the first supercomputer center in the National Science Foundation’s XSEDE (eXtreme Science and Engineering Discovery Environment) program to offer the new Globus sharing service.

Blog

To determine how the universe as we know it formed, scientists have come up with an ambitious strategy: just do it all over again.

The exascale — one million trillion calculations per second — is the next landmark in the perpetual race for computing power.

Blog

The general public tends to think of supercomputers as the big brothers of their home computers: a larger, faster, and more powerful version of more familiar everyday devices. But in his talk last week for Argonne National Laboratory's OutLoud series, CI senior fellow Peter Beckman urged the crowd to think of supercomputers more imaginatively, as the real-life version of a common sci-fi device: the time machine.

A modern laptop is faster than the state-of-the-art supercomputer Beckman used at Los Alamos National Laboratory in 1995, he said. That same year, a supercomputer with the computing speed of today's iPad would have ranked on the Top 500 list of the fastest computers in the world. Beyond raw speed, the programming strategies and hardware architectures developed on the room-sized supercomputers of the last 60 years have gradually trickled down to the consumer, as with the multi-core processors and parallel operations found in new laptops.

Blog

Last October, we helped celebrate Petascale Day with a panel on the scientific potential of new supercomputers capable of running more than a thousand trillion floating point operations per second. But the ever-restless high-performance computing field is already focused on the next landmark in supercomputing speed, the exascale, more than fifty times faster than the current record holder (Titan, at Oak Ridge National Laboratory). As with personal computers, supercomputers have been gaining speed and power at a steady rate for decades. Yet a new article in Science this week suggested that the path to the exascale may not be as smooth as the field has come to expect.

The article, illustrated with an image of a simulated exploding supernova by the CI-affiliated Flash Center for Computational Science, details the various barriers facing the transition from petascale to exascale in the United States and abroad. Government funding agencies have yet to throw their full weight behind an exascale development program. Private computer companies are turning their attention away from high-performance computing in favor of commercial chips and Big Data. And many experts agree that supercomputers must be made far more energy-efficient before leveling up to the exascale -- under current technology, an exascale computer would use enough electricity to power half a million homes.
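To see where a number like "half a million homes" could come from, here is a rough back-of-envelope sketch. The efficiency and household figures are assumptions of ours, not from the Science article: roughly 2 gigaflops per watt, about what Titan-class systems achieved, and an average home drawing about 1 kilowatt.

```python
# Rough estimate of exascale power draw under then-current technology.
# Assumptions (not from the article): machine efficiency of ~2 gigaflops
# per watt, and an average household drawing ~1 kilowatt.

EXAFLOP = 1e18            # target speed: 10^18 floating point operations per second
FLOPS_PER_WATT = 2e9      # assumed efficiency: ~2 gigaflops per watt
HOME_DRAW_WATTS = 1_000   # assumed average household draw: ~1 kW

power_watts = EXAFLOP / FLOPS_PER_WATT
homes = power_watts / HOME_DRAW_WATTS

print(f"Estimated power draw: {power_watts / 1e6:.0f} MW")
print(f"Equivalent to roughly {homes:,.0f} average homes")
# -> about 500 MW, on the order of half a million homes
```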

Blog

When the New York Times ran its investigative report in September on the massive amount of energy used by data centers, it drew widespread criticism from people within the information technology industry. While nobody involved with the operation or engineering of those data centers denied that they use a lot of resources, many experts took offense at the article's suggestion that the industry wasn't interested in finding solutions. "The assertions made in it essentially paint our engineers and operations people as a bunch of idiots who are putting together rows and rows of boxes on data centers and not caring what this costs to their businesses, nay, to the planet," wrote computer scientist Diego Doval. "And nothing could be further from the truth."

That statement was backed up by a talk given last week by Hewlett Packard Labs Fellow Partha Ranganathan, who told a room of computer science students and researchers about his company's efforts to develop "energy-aware computing." Ranganathan made the case for more efficient supercomputers and data centers not just on environmental grounds, but also as a hurdle that must be cleared for computing speed to continue the exponential march charted by Moore's Law. As the field hopes to push through the petascale to the exascale and beyond, Ranganathan said that the "power wall" -- the energy required for power and cooling -- was becoming a fundamental limit on capacity. So one of the greatest challenges the IT field currently faces is how to deliver faster and faster performance at low cost and with high sustainability.
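One way to make the "power wall" concrete is to hold the facility power budget fixed and ask how much efficiency, measured in operations per watt, would have to improve to reach the exascale. The sketch below assumes a 20 MW power budget (a figure often cited in exascale planning, not taken from Ranganathan's talk) and a roughly Titan-class baseline of 2 gigaflops per watt; both numbers are ours, for illustration only.

```python
# Sketch of the "power wall": the efficiency jump needed to reach exascale
# within a fixed power budget. Both the 20 MW budget and the ~2 gigaflops/watt
# baseline are illustrative assumptions, not figures from the talk.

EXAFLOP = 1e18                  # 10^18 floating point operations per second
POWER_BUDGET_WATTS = 20e6       # assumed 20 MW facility power budget
BASELINE_FLOPS_PER_WATT = 2e9   # assumed baseline efficiency (~Titan-class)

required_flops_per_watt = EXAFLOP / POWER_BUDGET_WATTS
improvement_factor = required_flops_per_watt / BASELINE_FLOPS_PER_WATT

print(f"Required efficiency: {required_flops_per_watt / 1e9:.0f} gigaflops per watt")
print(f"Roughly a {improvement_factor:.0f}x improvement over the assumed baseline")
# -> 50 gigaflops per watt, about a 25x gain in energy efficiency
```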