
What’s in a Ranking?


The web is a tangled mess of pages and links. But through the magic of the Google algorithm it becomes a neatly ordered ranking of “relevance” to whatever our heart desires. The network may be the architecture of the web, but the human ideology projected onto that network is the rank.
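To make that concrete, here is a minimal sketch of PageRank-style power iteration on a hypothetical four-page web. The toy graph and numbers are illustrative assumptions, not Google's actual algorithm, which is vastly more elaborate:

```python
# A minimal PageRank-style sketch: power iteration turns a tangled
# link graph into an ordered ranking. Toy data, not Google's algorithm.

links = {           # page -> pages it links out to (hypothetical)
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["a", "c"],
}
pages = sorted(links)
damping = 0.85      # the damping factor commonly cited from the PageRank paper
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):  # iterate until the scores stabilize
    rank = {
        p: (1 - damping) / len(pages)
           + damping * sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
        for p in pages
    }

# The messy network comes out as a neat, ordered list.
print(sorted(pages, key=rank.get, reverse=True))
```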

Often enough we take rankings at face value; we don’t stop to think about what’s really in a rank. Tremendous power is conferred upon the top N of anything, really: not just search results but colleges, restaurants, and a host of other goods. These are the things that get the most attention and become de facto defaults because they are easier for us to access. In fact, we rank all manner of services in our communities: schools, hospitals and doctors, even entire neighborhoods. Bloomberg has an entire site dedicated to them. These rankings shape a host of decisions we routinely make. Can we trust them to guide us?

Thirty years ago, rankings in the airline reservation systems used by travel agents were regulated by the U.S. government. Such regulation limited the ability of operators to “bias travel-agency displays” in ways that would privilege some flights over others. But this regulatory model for reining in algorithmic power hasn’t been applied in other domains, like search engines. It’s worth asking why not, and what that regulation might look like, but it’s also worth thinking about alternatives to regulation for mitigating such biases. For instance, we might design advanced interfaces that transparently signal the various ways in which a rank, and the scores and indices on which it is built, are constituted.

Consider an example from local media: the “Best Neighborhoods” app published by the Dallas Morning News (shown below). It ranks neighborhoods according to criteria like schools, parks, commute, and walkability. The default “overall” ranking, though, is opaque: How are the various criteria weighted? And how is each criterion even defined? What does “walkability” mean in the context of this app? If I am looking to invest in property, a simplified algorithm might mislead me; does it really measure the dimensions that matter most? And while we can interactively re-rank by any individual criterion, many people will only ever see the default ranking. Other neighborhood rankings, like the one from the New Yorker in 2010, do show their weights, but they’re non-interactive.

[Image: screenshot of the Dallas Morning News “Best Neighborhoods” app]
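To see how much hangs on those hidden weights, consider a toy sketch of a weighted-sum ranking. The neighborhood names, scores, and weight vectors below are invented for illustration; they are not taken from the Dallas Morning News app:

```python
# How a default "overall" rank depends on hidden criterion weights.
# All neighborhoods, scores, and weightings here are hypothetical.

neighborhoods = {
    "Oak Hill":  {"schools": 9, "parks": 4, "commute": 6, "walkability": 3},
    "Riverside": {"schools": 5, "parks": 8, "commute": 7, "walkability": 9},
    "Elm Court": {"schools": 7, "parks": 7, "commute": 4, "walkability": 6},
}

def overall_rank(weights):
    """Order neighborhoods by a weighted sum of their criterion scores."""
    score = lambda n: sum(w * neighborhoods[n][c] for c, w in weights.items())
    return sorted(neighborhoods, key=score, reverse=True)

# Two equally plausible weightings, two different "best" neighborhoods.
print(overall_rank({"schools": 0.5, "parks": 0.2, "commute": 0.2, "walkability": 0.1}))
print(overall_rank({"schools": 0.1, "parks": 0.2, "commute": 0.2, "walkability": 0.5}))
```

Under the first weighting Oak Hill tops the list; under the second, Riverside does. Without knowing the weights, “overall best” is close to meaningless.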

The notion of algorithmic accountability is something I’ve written about here previously. Algorithms are becoming ever more powerful arbiters of our decision making, both in the corporate world and in government. There’s an increasing need for journalists to think critically about how to apply algorithmic accountability to the various rankings the public encounters in society, including rankings (like neighborhood rankings) that their own news organizations may publish as community resources.

What should the interface for an online ranking look like so that it provides a level of transparency to the public? In a recent project with the IEEE, we sought to implement an interface for end users to interactively re-weight a ranking and visualize how their re-weightings affected it. But this is just the start: there is exciting work to do in human-computer interaction and visualization design to determine the most effective ways to expose rankings interactively, ways that are useful to the public but which also build credibility. How else might we visualize the entire space of weightings, and how they affect a ranking, in a way that helps the public understand the robustness of those rankings?
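One way to probe that robustness, sketched below with the same hypothetical data as above: sample many random weightings and count how often each candidate lands on top. An item that wins under almost any weighting is robustly ranked first; one that wins only in a narrow corner of the weight space is not.

```python
import random
from collections import Counter

# Sketch: estimate how robust the top spot is across the whole space
# of weightings by random sampling. Same hypothetical data as above.

neighborhoods = {
    "Oak Hill":  {"schools": 9, "parks": 4, "commute": 6, "walkability": 3},
    "Riverside": {"schools": 5, "parks": 8, "commute": 7, "walkability": 9},
    "Elm Court": {"schools": 7, "parks": 7, "commute": 4, "walkability": 6},
}
criteria = ["schools", "parks", "commute", "walkability"]

wins, trials = Counter(), 10_000
for _ in range(trials):
    # Normalized exponential draws give a weight vector distributed
    # uniformly over the simplex (i.e., all weightings summing to 1).
    draws = [random.expovariate(1.0) for _ in criteria]
    weights = {c: d / sum(draws) for c, d in zip(criteria, draws)}
    top = max(neighborhoods,
              key=lambda n: sum(weights[c] * neighborhoods[n][c]
                                for c in criteria))
    wins[top] += 1

# Share of sampled weightings under which each neighborhood ranks first;
# a visualization could map these regions of weight space directly.
for name, count in wins.most_common():
    print(f"{name}: {count / trials:.1%}")
```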

When we start thinking about the hegemony of algorithms and their ability to generalize nationally or internationally, there are also interesting questions about how to adapt rankings to local communities. Take something like a local school ranking. Rankings by national or state aggregators like GreatSchools may be useful, but they may not reflect how an individual community would choose to weight, or even select, the criteria included in a ranking. How might we adapt interfaces or rankings so that they are more responsive to local communities? Are there geographically local feedback processes that might allow rankings to reflect community values? How might we enable democratic input, or even voting, on local ranking algorithms?
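As one hypothetical mechanism, sketched below: let residents each submit their own weights over the criteria, then aggregate them, say by taking the per-criterion median and renormalizing, so the local ranking reflects a community consensus rather than a national default. The ballots and the median rule here are illustrative assumptions, not a worked-out voting scheme:

```python
from statistics import median

# Hypothetical sketch: derive a community weighting from residents'
# submitted preferences. Ballots and the median rule are illustrative.

ballots = [  # each resident's preferred weights over the criteria
    {"schools": 0.6, "parks": 0.1, "commute": 0.2, "walkability": 0.1},
    {"schools": 0.2, "parks": 0.3, "commute": 0.1, "walkability": 0.4},
    {"schools": 0.4, "parks": 0.2, "commute": 0.3, "walkability": 0.1},
]
criteria = ballots[0].keys()

# The per-criterion median is robust to extreme votes; renormalize so
# the aggregated community weights sum to 1 again.
medians = {c: median(b[c] for b in ballots) for c in criteria}
total = sum(medians.values())
community_weights = {c: m / total for c, m in medians.items()}

print(community_weights)  # these weights could then drive the local ranking
```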

In short, this is a call for more reflection on how to be transparent about the data-driven rankings we create for our readers online. There are research challenges here, in human-centered design, visualization, and the decision sciences, that, if solved, will allow us to build better and more trustworthy experiences for the public served by our journalism. It’s time to break the tyranny of the unequivocal ranking and develop new modes of transparency for these algorithms.

