Alan Jacobs


What we do know is that Facebook, like many social media platforms, is an experiment engine: a machine for making A/B tests and algorithmic adjustments, fueled by our every keystroke. This has been used as a justification for this study, and all studies like it: Why object to this when you are always being messed with? If there is no ‘natural’ News Feed, search result, or trending topic, what difference does it make if you experience A or B?
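To make the "A or B" framing concrete, here is a minimal, hypothetical sketch of how a platform might silently bucket users into experiment arms. The function, user ID, and experiment name are invented for illustration and are not drawn from any actual Facebook system; the point is only that assignment happens deterministically, invisibly, and without asking.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically bucket a user into a variant by hashing their ID
    together with the experiment name."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment,
# without ever being asked or told.
print(assign_variant("user-12345", "feed_ranking_test"))
```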

The difference, for [Edward] Shils and others, comes down to power, deception and autonomy. Academics and medical researchers have spent decades addressing these issues through ethical codes of conduct and review boards, which were created to respond to damaging and inhumane experiments, from the Tuskegee syphilis experiment to Milgram’s electric shocks. These review boards act as checks on the validity and possible harms of a study, with varying degrees of effectiveness, and they seek to establish traditions of ethical research. But what about when platforms are conducting experiments outside of an academic context, in the course of everyday business? How do you develop ethical practices for perpetual experiment engines?

It is a failure of imagination and methodology to claim that it is necessary to experiment on millions of people without their consent. There is no easy answer, but we could do worse than begin by asking the questions that Shils struggled with: What kinds of power are at work? What are the dynamics of trust, consent and deception? Who or what is at risk? Just as academic research is framed within a context of wider social responsibility, we can consider the ways the technology sector has a social responsibility, too. To date, Silicon Valley has not done well in thinking about its own power and privilege, or what it owes to others. But this is an essential step if platforms are to understand their obligation to the communities of people who provide them with content, value and meaning.