Most people think that scientists spend all of their time conducting experiments. But the less glamorous side of science comes after the experiments are done, when scientists laboriously comb through the data their work has generated. As new technologies make laboratory procedures faster and more automated, an ever-larger share of a scientist's time goes to the often tedious task of analyzing data. To accelerate discovery, use resources more efficiently, and avoid burning out graduate students, new ways of automating data analysis are needed.
Carolyn Phillips, a Computation Institute staff member and postdoctoral fellow at Argonne National Laboratory, presented one solution to this data-analysis traffic jam in her talk at the CI on December 14th. Phillips works with scientists studying nanoscale self-assembly, the ability of small, simple molecules to form incredibly complex patterns with no external influence. Many researchers in this field use computer simulations to understand how self-assembly works and to find new ways of harnessing it in the design of drugs, materials, and cleaner energy sources. But these simulations can produce a flood of data, most of which still has to be sorted and analyzed manually by slow, distractible humans before the next round of simulations can be run, a problem Phillips set out to fix.