Humans have a visual bias, hundreds of thousands of years after our pattern-recognition skills evolved to serve prehistoric hunting and predator avoidance. In a newspaper or a scientific article, a well-designed graphic or picture can often convey information more quickly and efficiently than raw data or a lengthy chunk of text. And as the era of data science dawns, the interpretive role of visualization is more important than ever. It's hard to even imagine the size of a petabyte of data, much less the complex analysis necessary to extract knowledge from the flood of information within.
Fortunately, scientists and engineers were studying this need for visualization long before Big Data became a buzzword. The Electronic Visualization Laboratory, housed at the University of Illinois at Chicago, has been active in this field long enough to have done special-effects work on the original Star Wars. EVL researchers have pioneered methods in computer animation, virtual reality and touchscreen displays, and adapted those technologies for use by scientists in academia and industry. But in EVL director Jason Leigh's talk at the University of Chicago Medical Center on January 29th, the killer app he focused on most was almost as old as those hunter-gatherer ancestral humans: collaboration.
Leigh's talk was the first Fred Dech Memorial Lecture, in honor of the former Computation Institute, University of Chicago Department of Surgery, and EVL staff member who passed away in 2011. Dech developed computer graphics for use in medical education and helped set up and run a video conferencing system the Department of Surgery used to communicate with doctors and residents at other sites. Speaking in that very same conference room, Leigh appropriately put the focus on technology and collaboration in discussing current EVL projects.
To develop better research tools, EVL scientists first studied the work environments of various people. Leigh showed photos of Al Gore's absurdly cluttered desk, the tapestry of pictures in Pixar's storyboard room and a mystery writer's office with the walls blanketed in Post-it notes and maps. The common theme, he said, was that people like to spread their information out, which makes it easy to detect links between disparate data points and to work with others on a common problem. The practice is similar to the concept of the "commons," a shared, central space in a town or on a college campus where people come together to collaborate. The question, Leigh asked, is how can we replicate and amplify these commons with technology?
That thinking was the inspiration for LambdaVision, a giant video wall first developed by EVL in 2004. The 2009 version was constructed as a giant touch screen -- "like an iPad on steroids," Leigh said -- where documents, video, graphics and other media can be intuitively moved, zoomed and otherwise manipulated in true science-fiction film fashion. In a classroom, students can even transmit items from their own laptops up to the video wall at the front, producing a phenomenon Leigh said he had never seen before in the classes he teaches with the screen.
"After classes, the students don't leave," he said. "They stay after and continue their collaboration on the wall."
LambdaVision has been a big success for the EVL, with installations at more than 100 sites: universities, Disney, a center studying microbial oceanography and an agricultural research center in India. But the EVL wasn't content to stop at two dimensions, Leigh said. Instead, they merged the interactive features of LambdaVision with an older project called the CAVE, originally developed in the early 1990s as a room-sized virtual reality environment that used projectors to immerse users in 360 degrees of graphics. Its progeny, CAVE2, instead uses a near-circle of 72 3D flat-screen panels and 4,000 times the computing capacity to produce what Leigh called a "hybrid reality environment."
Among Leigh's CAVE2 examples were a 3D panoramic tour of the Luxor ruins in Egypt -- an archeological site too delicate to withstand heavy tourist traffic -- and a six-million-atom supercomputer simulation run at Argonne National Laboratory. Like the video walls, CAVE2 is especially well suited for educational purposes, such as teaching surgical trainees about the anatomy of blood vessels in the human brain by actually standing within a high-resolution graphic -- an experience a laptop screen just couldn't do justice to.
As Professor of Surgery John Alverdy described the experience: "I was standing inside the Circle of Willis, a ring of blood vessels inside the brain. If you can be immersed in this environment, and it's big enough...it completely allows you to contextualize the anatomy to what we do in the operating room."
Despite all the high-powered gadgetry on display, it was reassuring that the EVL technologies were built on relatively simple human psychology -- the urge to physically interact with one's environment and to share the experience with others. Even as big data and advanced computation push us towards the future of science, the primitive skills of our visual system remain some of the most powerful tools we possess.
"It's using the power of your eyes to do something that computers aren't very good at," Leigh said.
You can watch a trailer for CAVE2 below.