Faculty Host NSF Workshop on the Future of Algorithms

From fake news to facial recognition, self-driving cars to predictive policing, these technologies all have one thing in common: algorithms. Quickly taking over decision-making typically performed by people, algorithms are increasingly changing the world around us.

In early December, two researchers from the Department of Media and Information ran a workshop in Arlington, Va., to discuss the impact of algorithms on people’s lives. Supported by a grant from the National Science Foundation (NSF), Emilee Rader, a researcher studying how algorithmic systems affect individuals and society, and Rick Wash, a researcher studying computer security, worked with 42 experts to identify ways to better understand, create and evaluate algorithmic systems that people can trust.

“Think about things like self-driving cars being driven by algorithms,” said Rader. “How do they need to be designed so that people can trust them? How can we know they’re resistant to being attacked by hackers or to manipulation of the data that might make them behave in strange ways? Those are the questions we tried to answer.”

After issuing a call for white papers, the researchers invited qualified individuals from the academic, government and industry sectors to discuss the future of algorithmic decision making.

“The diversity of the group was particularly important,” said Rader. “Making progress on this issue is challenging, and requires many different kinds of expertise and perspectives.”

Some attendees were working in criminal justice and looking at algorithms for predictive policing or for sentencing, while others had expertise in regulating how algorithms are used to assign credit scores. Attendees also included a group from New York City’s child services using algorithms to predict which children were at highest risk and to place them with foster families more effectively.

Over the course of two days, the attendees examined algorithms and their effects on the world and identified five high-level themes, which they discussed in detail throughout the workshop. The themes were:

  1. The Training of Data Scientists – Many people using an algorithm don’t understand the design decisions that went into it, or how to interpret its output. How are these individuals, who may not have the appropriate training, affecting the world? 
  2. Evidence, Accountability and Transparency – How can we regulate algorithmic systems and hold them accountable? What information should be provided to end users to help them use algorithms effectively and understand their impact in the world?
  3. Handling Uncertainty – Many processes go into collecting the data that feeds an algorithm, developing the algorithm itself and interpreting its results. Errors can occur in each phase, so how can we understand these systems better?
  4. What ‘Trust’ Means for Algorithms – Is it similar to what we say we’re doing when we trust another person? What kinds of evidence would a system need to provide to people in order for us to feel like we can trust it?
  5. Workarounds and Feedback Loops – Think Facebook and fake news. What happens when someone uses an algorithm in a way it wasn’t meant to be used, to achieve outcomes that serve their own ends? This includes adversaries trying to hack or compromise the system, as well as people who find ways to work around the algorithm to get their job done. 

The themes discussed at the workshop represent relatively new problems, and according to Rader, there is still room for further research that could affect many people’s lives.

Now that the workshop is over, Rader and Wash will write a report to be delivered to the National Science Foundation in spring 2018.

By Nikki W. O’Meara