Uncanny Valley: Being Human in the Age of AI

By Claudia Schmuckli

June 19, 2020

Uncanny Valley: Being Human in the Age of AI examines how the current realities of AI and its applications challenge traditional understandings of the human–machine relationship, which have been locked into a discourse of imitation or likeness for centuries. This focus found an enduring expression in the uncanny valley, a metaphor introduced by Japanese robotics engineer Masahiro Mori in 1970 to chart humans’ comfort level with a machine based on its degree of resemblance to them. However, AI as we currently experience it is not so much a problem of physical or intellectual replication as one of (mis)representation (and its monetization).

In less than a decade, the world has been reconfigured by algorithmic psy-ops that turned elections by feeding off people’s tribal reflexes to propagate nationalism and its system of cogs. Extremist and discriminatory tropes thought to be buried in history reemerged as a partisan body—an ideological Frankenstein floating to the surface of the political landscape like the monstrous islands of plastic in the Pacific Ocean. Even before the 2016 US presidential election and Brexit, it had become clear that this development was driven by statistical machines, today simply called artificial intelligence, or AI. This type of machine intelligence has traditionally been applied to decrease uncertainty in fields as diverse as climate science, healthcare, and gaming—areas that benefit from the prediction of potential risks. But introduced into the social sphere, the predictive mechanisms of AI have instead wreaked havoc, augmenting uncertainty and instability. In doing so, they have triggered the resurgence of a (primarily male) “hero” driven by anger and a thirst for recognition—a pseudo Achilles of Greek mythology, yet stripped of the tragic appeal that made him a legend.

This is a development to be concerned about for anyone invested in the ideals of democracy. Paired with a worldwide pandemic, it is nothing short of alarming to face the consequences of this resurgence at the highest levels of government. Moving forward, the precarity of being physically in the world will inevitably accelerate the automation of operations and services across industries. Corporate and government investment in AI is poised to grow exponentially to ensure long-term public safety and economic stability. In the absence of a wide-ranging regulatory framework for the development and application of AI, concerns about some of the inherent fallacies of AI systems abound. On the micro-level, AI is haunted by the “interpretability” or “black-box” problem: the difficulty of explaining how a particular AI has reached its conclusions. On the macro-level, it is burdened with what research scientist Ali Rahimi has termed the “alchemy problem” of the larger field: the predominant reliance on trial and error in the absence of rigorous criteria for choosing one AI architecture over another. AI optimization also suffers from a fundamental logical limitation: algorithmic learning is a process based on approximation and is thus fundamentally heuristic. The statistical averaging of data toward a representative mean renders such systems incapable of detecting and predicting semantic complexities or “anomalies.”
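The last point can be made concrete with a minimal sketch. The example below is hypothetical, built on synthetic data rather than any real system: a model that minimizes its average error settles on the majority pattern, and the rare case is, by construction, what it is optimized to ignore.

```python
import numpy as np

# Hypothetical, synthetic data: 95% of observations follow one pattern,
# 5% are "anomalies" that carry their own meaning.
rng = np.random.default_rng(0)
majority = rng.normal(loc=0.0, scale=1.0, size=950)
anomalies = rng.normal(loc=8.0, scale=0.5, size=50)
data = np.concatenate([majority, anomalies])

# The single prediction that minimizes average squared error is the mean...
prediction = data.mean()

# ...which hugs the majority and misrepresents every anomaly.
print(f"prediction: {prediction:.2f}")
print(f"mean error on the majority:  {np.abs(majority - prediction).mean():.2f}")
print(f"mean error on the anomalies: {np.abs(anomalies - prediction).mean():.2f}")
```

More data does not fix this on its own: as long as the objective averages over the whole population, the anomaly remains the thing the optimization is designed to smooth away.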

Significantly, the deployment of such models in the social sphere, in the form of corporate or institutional applications for the management of people’s private and public lives, often leads to the suppression of social and cultural diversity. As many critics have pointed out, this practice results in a distorted reflection of a prevalent normativity informed by multiple biases that build on one another. Standardized institutional and social values and taxonomies tend to inform training data sets, whether existing or newly created, so the source data at the basis of AI systems often fails to accurately reflect the population or phenomenon under study. These biases are then translated into new statistical and computational norms that, when employed, amplify existing power structures and inequalities. A growing number of reports have been taking stock of social media’s participation in social and political processes (seen in Facebook’s entanglement with Cambridge Analytica) and charting the efficacy of institutional applications, such as the predictive-policing software PredPol.
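As a rough sketch of that translation from biased records into computational norms, the following example is entirely synthetic and hypothetical: a standard classifier is trained on “historical” decisions in which one group was approved less often despite identical qualifications, and it learns group membership as a predictive signal it will apply to future cases.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic, hypothetical "historical" data: skill is distributed
# identically in both groups, but past decisions penalized group B.
rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)        # 0 = group A, 1 = group B
skill = rng.normal(size=n)                # same distribution for both groups
noise = rng.normal(scale=0.5, size=n)
approved = (skill - 1.0 * (group == 1) + noise) > 0   # biased ground truth

# A model fit on those records encodes the bias as a learned weight.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)
print(dict(zip(["skill", "group"], model.coef_[0].round(2))))
# The negative weight on "group" is the historical bias, now a
# statistical norm applied to every future applicant.
```

Dropping the group column does not necessarily help, since correlated proxies in real-world data can stand in for it, which is one reason the distortions described above are so persistent.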

Despite copious evidence, much of the AI field continues to turn a blind eye to the political and social changes its practices and technologies have engendered. The widespread notion of political innocence that pervades Silicon Valley, coupled with the industry’s just-world credo—the conviction that people tend to get what they deserve—has allowed the myth of technology’s neutrality to persist. Meanwhile, people’s algorithmic optimization is framed in an ideological context of meritocracy supposedly blind to systemic bias. Scholar Fred Turner, author of From Counterculture to Cyberculture: Stewart Brand, the Whole Earth Network, and the Rise of Digital Utopianism (2006), describes the problem this way: “When you . . . build a world that doesn’t take account of differences, but rather tries to neutralize them in a single process, or a single code system, or under a single ethical rubric, what you end up doing is erasing precisely the kinds of differences that need to be negotiated.”

Installation view of Trevor Paglen, They took the Faces from the Accused and the Dead... (SD18) in Uncanny Valley: Being Human in the Age of AI at the de Young museum. Courtesy of the artist, Altman Siegel Gallery, San Francisco. Photography by Gary Sexton.

Works by Stephanie Dinkins, Forensic Architecture, Lynn Hershman Leeson, Trevor Paglen, Hito Steyerl, and Martine Syms directly address the exclusionary mechanisms of AI. Their tangible polemics redefine the coordinates of the uncanny valley. It is no longer limited to the image of the humanlike robot or “thinking machine.” Instead, it is mapped by the inscrutable calculations of algorithms designed to mine and analyze human behavior and project it into tradable futures. It is occupied by our statistical doubles, spread across stacks and reflected back to us in a ceaseless visual montage of prompts for social engagement, political advocacy, and e-commerce, all in the service of further data collection. It is defined by the addictive mechanisms of applications that aspire to anticipate humans’ every need while facilitating, and shaping, daily life and social interactions. Social media in particular has become an emotional exoskeleton that draws people to the screen like fireflies. Writer and scholar Shoshana Zuboff has termed this phenomenon “surveillance capitalism,” describing it as “a new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction, and sales.”

With human data having surpassed oil as the most valuable resource on Earth, the development of new AI is largely propelled by the profitability equations of the industries and policy makers that increasingly rely on—and fund—it, with the purpose of immediate application. In this context, the uncanny valley of the twenty-first century is emerging as the focus of a new form of frontierism, whose sole aim is an ever deeper and more tailored capturing of our attention—its hidden mechanisms geared toward behavioral design and automation that reconfigure the formation of subjectivities, societies, economies, and ecologies in ways we are only beginning to understand.

Text by Claudia Schmuckli, Curator in Charge of Contemporary Art and Programming; from Beyond the Uncanny Valley: Being Human in the Age of AI, Fine Arts Museums of San Francisco. Available for purchase through the Museums’ Stores.


Learn more about Uncanny Valley at the de Young museum.
