As social networks, apps, and other websites become more adept at turning user data into personalized experiences, stories increasingly highlight the exciting -- and sometimes disturbing -- implications of that technology for health. From Target predicting a teen's pregnancy before she told her parents to Facebook looking for signs that a user may be suicidal, these case studies raise new questions about how the data trails we leave online can improve human health without crossing the line into violating privacy.
In an opinion piece for WIRED, CI faculty and fellow Samuel Volchenboum writes about the promise and ethical concerns of these new capabilities. On one hand, he argues that social network data, when combined with other sources, could turn the world into "one big clinical trial."
Let’s say a social network has an algorithm that analyzes a user’s activities -- things they complain about, articles they share, friends’ posts they like, and so on. Such an algorithm could potentially identify a pattern suggesting the presence of a medical condition.
Now imagine being able to link across social networks and also to other available data streams from wearables, sensors, and mobile devices. All of a sudden, the predictive value of these disparate data streams could become very high. For example, posts about headaches and nausea, combined with a gradually decreasing step count on a Fitbit, cell phone GPS data indicating trips to the pharmacy, and typing accuracy demonstrating a slow, almost imperceptible loss of coordination could all portend an ominous condition.
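To make the idea concrete, the kind of cross-stream pattern detection described above can be sketched as a simple weighted risk score. Everything here is a hypothetical illustration: the signal names, weights, normalization constants, and threshold are assumptions for the sketch, not part of any real product or of Volchenboum's piece.

```python
# Hypothetical sketch: combining disparate data streams into one risk score.
# Signal names, weights, and scaling constants are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SignalSnapshot:
    headache_posts_per_week: float    # from social-network text analysis
    daily_step_trend: float           # negative = declining step count (wearable)
    pharmacy_visits_per_month: float  # inferred from phone GPS
    typing_accuracy_trend: float      # negative = worsening coordination

def risk_score(s: SignalSnapshot) -> float:
    """Weighted combination of normalized signals; weights are made up."""
    score = 0.0
    # Each term is clipped to [0, 1] before weighting.
    score += 0.3 * min(s.headache_posts_per_week / 5.0, 1.0)
    score += 0.3 * min(max(-s.daily_step_trend, 0.0) / 500.0, 1.0)
    score += 0.2 * min(s.pharmacy_visits_per_month / 4.0, 1.0)
    score += 0.2 * min(max(-s.typing_accuracy_trend, 0.0) / 0.05, 1.0)
    return score

snapshot = SignalSnapshot(
    headache_posts_per_week=4.0,
    daily_step_trend=-400.0,       # losing ~400 steps per day
    pharmacy_visits_per_month=3.0,
    typing_accuracy_trend=-0.04,   # accuracy slowly degrading
)
print(round(risk_score(snapshot), 2))  # → 0.79
```

The point of the sketch is that no single signal is alarming on its own; it is the combination -- exactly the "disparate data streams" the paragraph describes -- that pushes the score toward a level that might warrant attention.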
However, under the old Spider-Man credo of "with great power comes great responsibility," Volchenboum emphasizes the need for new forms of consent that protect privacy and preserve patients' control over their medical information and how it's used. An "opt-in" protocol is likely the best structure to balance the risks and benefits of data-driven medical research and recommendations.
Individuals should be able to opt in to allow providers to collect and track their data for health predictions. Companies would need to carefully determine tracking criteria for specific diseases, and at what point they would notify the user that they are at risk. Once notified, the user would have the option to receive more information or send their data directly to their healthcare provider. For this to work, new data governance and stewardship models will be required, and legal protections for people and their data will become increasingly important.
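The opt-in flow outlined above can be read as a small state machine: no tracking before explicit consent, notification only when disease-specific criteria are met, and sharing with a provider only at the user's request. The states, threshold, and method names below are assumptions made for this sketch, not a specification from the article.

```python
# Hypothetical sketch of the opt-in consent flow described above.
# States and the risk threshold are illustrative assumptions.
from enum import Enum, auto

class ConsentState(Enum):
    NOT_ENROLLED = auto()
    OPTED_IN = auto()
    AT_RISK_NOTIFIED = auto()
    SHARED_WITH_PROVIDER = auto()

class UserConsent:
    def __init__(self) -> None:
        self.state = ConsentState.NOT_ENROLLED

    def opt_in(self) -> None:
        # Tracking may only begin after an explicit opt-in.
        self.state = ConsentState.OPTED_IN

    def notify_if_at_risk(self, risk_score: float, threshold: float = 0.7) -> bool:
        # The threshold stands in for disease-specific criteria a company
        # would have to determine; users who never opted in are never notified.
        if self.state is ConsentState.OPTED_IN and risk_score >= threshold:
            self.state = ConsentState.AT_RISK_NOTIFIED
            return True
        return False

    def share_with_provider(self) -> None:
        # Only a user who has been notified can forward data to a clinician.
        if self.state is ConsentState.AT_RISK_NOTIFIED:
            self.state = ConsentState.SHARED_WITH_PROVIDER

user = UserConsent()
user.opt_in()
user.notify_if_at_risk(0.85)
user.share_with_provider()
print(user.state.name)  # → SHARED_WITH_PROVIDER
```

Encoding the flow this way makes the governance point visible in the control structure: every transition toward more data use requires an action by the user, never by the company alone.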
The people, companies, and organizations that hold private data have a big responsibility. If they're going to use these data to make better predictions about health and disease, then everyone needs to work together to better understand the expectations and responsibilities of all parties. The technical, legal, and social barriers are significant, but the potential for improving people’s health is tremendous.
For more on Volchenboum's work at the CI and the University of Chicago Center for Research Informatics, read the Data-Driven Medicine feature from this spring's Medicine on the Midway magazine.