It used to be that the camera never lied. We tend to privilege visual content, trust what we see, and rely on police cams, mobile recording tools, and similar devices to tell us what is really happening on the streets, in local businesses, and beyond.

WIRED OPINION


Catherine Brooks (@catfbrooks) is an Associate Professor of Information at the University of Arizona, where she is the associate director of the School of Information and founding director of the Center for Digital Society and Data Studies. She is a Public Voices Fellow with the Op Ed Project.

Take, for example, a viral video that shows a white woman calling the police as black men in Oakland attempt to barbecue. Millions are laughing, and the woman’s image is being used as a meme across the Internet. When a video of a patron threatening café employees for not speaking English went viral, the subject, New York attorney Aaron Schlossberg, was identified on social media within hours. His office information was shared quickly, and comments on review pages and public shaming ensued. The racist lawyer ended up drawing the attention of mariachis, who played music outside his apartment.

In both these cases, the videos were real, the memes entertaining, and the Twitter storm deserved. After all, mobile videos and other cams open transformative new avenues for justice precisely because they can spread like wildfire around the world. But this kind of ‘justice’ landscape works only as long as we can trust the videos we see, and faked videos are on the horizon. Often called “deepfakes,” a term coined by a Reddit user for videos that swap porn performers’ faces for those of celebrities, these fabricated clips threaten to erode that trust.
