In 2007, an interviewer asked Google’s then CEO, Eric Schmidt, what Google might look like in five years’ time. Schmidt replied, “…we will get better at personalization. The goal is to enable Google users to be able to ask questions such as ‘What shall I do tomorrow?’ and ‘What job shall I take?’”1 Schmidt’s pronouncement evokes the scope of Silicon Valley’s fortune-telling ambitions—the remarkably intimate intertwining of seeking, knowing, predicting and nudging envisioned by data-rich businesses. This highly tailored, future-oriented bent has only intensified since. In recent years, the business scholar Shoshana Zuboff has described “surveillance capitalism” as “selling access to the real-time flow of your daily life—your reality—in order to directly influence and modify your behavior for profit.”2 Zuboff details tech-firm innovations that collect “behavioural surplus,” such as Samsung’s Smart TV. This voice-responsive device collects not only voice commands but all speech in its vicinity (including highly personal information), and sells chunks of dialogue on to third parties seeking to target advertisements or perfect voice-system algorithms.3 Some legislation has been passed against such practices. However, Zuboff contends that legal constraints will ultimately remain ineffective so long as “the prediction imperative cracks the whip to drive the hunt for unexplored pieces of talk from daily life.”4
Eric Schmidt, in Caroline Daniel and Maija Palmer, “Google’s Goal: to Organize your Daily Life,” Financial Times, 22 May 2007, https://www.ft.com/content/c3e49548-088e-11dc-b11e-000b5df10621 (accessed 25 February 2019); and Karl Palmås, “Predicting What You’ll Do Tomorrow: Panspectric Surveillance and the Contemporary Corporation,” Surveillance and Society 8(3), 2011: 347.
Shoshana Zuboff, “Google as Fortune Teller: The Secrets of Surveillance Capitalism,” Frankfurter Allgemeine Zeitung, 5 March 2016, http://www.faz.net/aktuell/feuilleton/debatten/the-digital-debate/shoshana-zuboff-secrets-of-surveillance-capitalism-14103616.html?printPagedArticle=true#pageIndex_0.
Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (New York: Public Affairs, 2019), 263-4.
Zuboff, The Age of Surveillance Capitalism, 266.
If Silicon Valley companies seek to predict the future, the surest way to predict it is to shape it—by entering into the flow of daily decisions and architecting the choices (click here!) available to seeking subjects.5 Intervening in the choices on offer allows companies to project a certain degree of certainty about their ability to shape the future. Indeed, for Zuboff, the surveillance-capitalist prediction imperative is “a challenge to the elemental right to the future tense, which accounts for the individual’s ability to imagine, intend, promise, and construct a future.”6 But what are this prediction imperative’s histories, its grammars of time,7 its unexpected nuances? How else might prediction imperatives be aligned—perhaps running counter to surveillance capitalism?
I am thinking of Thaler and Sunstein’s term ‘choice architecture’ here, in Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness (New Haven: Yale University Press, 2008).
Zuboff, The Age of Surveillance Capitalism, 20.
I borrow this term “grammars of time” from Peter Rawlings’ essay “Grammars of Time in Late James” (The Modern Language Review 98(2), 2003: 273-284), in which Rawlings addresses the “burgeoning anteriority” (p. 273) of both America in the post-Civil War period, and of Henry James’ writing, the “classically American predicament that is pondering (prospectively and retrospectively) the foreclosure of choice, the ultimate unavailability of ubiquity, even for natives of the New World, and the evanescent dream of sustaining endless possibility, and the impossibility of doing so” (p. 278).
Today’s prediction imperatives are deeply entangled with histories of risk. Long before online platforms such as Google and Facebook became hegemonic, companies and institutions devised and administered statistical calculations—from actuarial methods for predicting recidivism amongst potential parole grantees, to credit scores, insurance premium calculations, and supply-chain modelling.8 These methods often aimed to determine who (or what) might be a safe bet or a grave risk, and to grant benefits (from credit to parole) or take precautions (think insurance or hedging) accordingly. The modern “risk society”, as Ulrich Beck called it, obsessively calculates and manages risk—and in so doing, entrenches the very concept of risk as a mode of engagement with the world.9 Given this, it is no surprise that today, subjects’ future behaviours have come to be ranked according to varying levels of trustworthiness and purchasing power, measured by obscure and often proprietary methodologies. (Who understands exactly why they were given a particular credit score, or how a travel app offered them a different fare for the same flight than it offered a friend?10) Probabilistic predictions aim to maximize gains and mitigate risk, although their widespread application can have unexpected consequences—from entrenching new forms of inequality,11 to exacerbating systemic risk.12
See, for instance, Bernard Harcourt, Against Prediction: Profiling, Policing and Punishing in an Actuarial Age (Chicago: University of Chicago Press, 2007); Martha Poon, “From New Deal Institutions to Capital Markets: Commercial Consumer Risk Scores and the Making of Subprime Mortgage Finance,” Accounting, Organizations and Society, Vol. 35, No. 5 (October 27, 2008): 654-674; Michel Feher, Rated Agency: Investee Politics in a Speculative Age (New York: Zone Books, 2018); and Jaron Lanier, Who Owns the Future? (New York: Simon & Schuster, 2014): 71-2.
Ulrich Beck, Risk Society: Towards a New Modernity (London: SAGE, 1992); Anthony Giddens and Christopher Pierson, Making Sense of Modernity: Conversations with Anthony Giddens (Stanford: Stanford University Press, 1998).
See Rafi Mohammed, “How Retailers Use Personalized Prices to Test What You’re Willing to Pay,” Harvard Business Review, 20 October 2017, https://hbr.org/2017/10/how-retailers-use-personalized-prices-to-test-what-youre-willing-to-pay.
Frank Pasquale, The Black Box Society: The Secret Algorithms that Control Money and Information (Cambridge, MA: Harvard University Press, 2015).
Randy Martin, Knowledge, Ltd: Toward a Social Logic of the Derivative (Philadelphia: Temple University Press, 2015).
Statistical calculations of risk, carried out in decades past, were cumbersome. Now, in the so-called age of big data, machine learning algorithms identify, predict and intervene in behavioural patterns. They capitalize on potential synergies between subjects’ and platforms’ desires in real time, pre-empting the risks of non-payment or non-engagement. They architect choices; intervene in daily lives, nudging behaviour into alignment with platform desires; and bet on the success of their interventions. To put it very broadly: from the collection of data on deviant behaviour in the Napoleonic era13, to today’s Samsung Smart TVs, risk calculation and behavioural prediction have become increasingly intertwined, and increasingly actionable in real time.
See Ian Hacking, The Taming of Chance (Cambridge: Cambridge University Press, 1990).
Even so, something in the above description has already left the ‘risk’ narrative behind. It evades the prediction imperative—insofar as prediction (in its modern and recent iterations) is that which calculates and mitigates risk. Aligning subject-desire and platform-desire cannot be carried out solely in the province of prediction; for desire is production, not prediction. Desire progresses by means of found, leaned-into synergies between agents—not through a predetermined image of a ‘what’ that is somehow lacking, and which could, in its predetermined-ness, be predicted in advance.
“What shall I do tomorrow?” Karl Palmås has described Google’s ambitions for the personalization of seeking (expressed by Schmidt above) as a perfect exemplar of prediction within the contemporary corporation’s surveillant operations14. Yet Schmidt’s statement—with its attitude of seeking, desire, and a coming-toward-certainty around a particular decision—also speaks to a newly predominant cultural logic that goes well beyond mere prediction. It envisions an oracular mode of address: a fatalistic, future-oriented mode of expression governing how fortune-telling platforms and seeking subjects interface with one another. On the face of it at least, this mode of address conjures up an image of the all-knowing platform’s answering-attitude, which responds to uncertain, decision-fatigued, neoliberal ‘Googling’ subjects, seeking clear and singular paths toward their ‘right’ course of action. The future-oracular is a mode of address bound up with the platform’s expression of its unmitigated and flexibly authoritative capacities for deploying information, as well as its need to activate this authoritativeness through the seeking-subject’s performative act of questioning. Thus, Google and other like-minded companies draw not only from the histories of prediction (qua risk mitigation), but also from the histories of divination: a subset of predictive practices foregrounding the role of ritual acts seeking meaningful relationships to chance.
Palmås, “Predicting What You’ll Do Tomorrow,” 347.
Oracular practices can be traced back to the ancient world—as, indeed, can many other modes of prediction. Briefly comparing two well-known predictive apparatuses from ancient Greece—the Antikythera mechanism and the Oracle of Delphi—may help to tease out the distinctions between prediction and divination, which will in turn set the stage for my account of how today’s future-oracular mode of address might differ from that of the ancient Delphic oracle. The Antikythera mechanism was recovered from a shipwreck in 1901, and is thought to have been made between 150 and 100 BCE. It is often described as the world’s first analogue computer: a complex, delicately calibrated, multi-geared device, used to predict planetary positions and other astronomical events, and to track the ancient Olympic Games’ four-year cycle. The clock-like Antikythera mechanism predicted planetary occurrences that came around “like clockwork”—with perfect and near-immutable regularity, rendered comprehensible by a gear system nuanced enough to map the delicate patterns at play.
By contrast, the oracles at the Temple of Apollo in Delphi (c. 8th–4th century BCE) traded in contingency. Supplicants came to the Oracle of Delphi from far and wide, seeking counsel on political and personal decisions. Their questions pertained to unpredictable milieus, and were tinged with curiosity about the best course of action. They prepared their questions with careful reflection, under the guidance of priests. Once the questions were prepared, the oracle would inhale intoxicating ethylene vapours emerging from active fissures in the temple floor, and give cryptic answers to the seekers’ questions.15 Inscribed at the site, the phrase “gnothi seauton” (“know thyself”) greeted supplicants. An incorrect interpretation of the oracle’s message, based on flawed self-understanding, could bring disaster.16
John R. Hale, Jelle Zeilinga De Boer, Jeffrey P. Chanton and Henry A. Spiller, “Questioning the Delphic Oracle: When science meets religion at this ancient Greek site, the two turn out to be on better terms than scholars had originally thought,” Scientific American (August 2003): 66-73; Henry A. Spiller, John R. Hale and Jelle Z. De Boer, “The Delphic Oracle: A Multidisciplinary Defense of the Gaseous Vent Theory,” Clinical Toxicology 40(2), 2002: 189-196. Of course, oracular practices proliferated in the ancient world; Delphi is but one example.
King Croesus of Lydia met with such a disaster after asking the Delphic oracle whether he should go to war against the Persian Empire. The oracle said: “If Croesus goes to war he will destroy a great empire.” Interpreting this as an auspicious sign, Croesus went to battle—only to realize that the great empire to be destroyed was his own.
Whereas the Antikythera mechanism’s predictions presume a regular unfolding of events over time, the Temple of Apollo in Delphi constructs an intense interface between oracle and supplicant, earth and god. The momentary intermingling of these actors gives voice to contingent futurities of change and chance, the very shape of which hinges on subjects’ self-understanding. But of course, the situation also goes far beyond self-understanding, forming a fatalistic architecture of doubled supplication (the seeker asks the oracle, the oracle asks the gods) from a peculiar admixture of ethylene, temple rocks, mythologies and human actors. Merging human and non-human agencies, the oracle opens decision-time.
Perhaps, through an interest in habit and patterns of behaviour combined with increased computational power, big data analytics begin to merge the Antikythera mechanism’s and the oracle’s distinct approaches to the future: fashioning their political economies of propensity17 from robustly identified behavioural patterns—consumer tendencies and habits-in-formation unfolding as if according to some complex clockwork—combined with subjects’ attitudes of seeking, Googling, wishing to know. This intermingling of calculation and divination is perhaps akin to what Joshua Ramey describes as a colonization of divination. “At the heart of divination is the practice of relating to chance as an occasion to make meaning,”18 Ramey writes. Yet in our time, chance has been misaligned: a “betrayal of chance is at the heart of neoliberal ideology, which exonerates markets as a form of spontaneous social order based on chance, but disciplines chance to conform to market demands.”19 Markets harness contingency and, indeed, seek to calculate risks precisely where they are least expected to emerge: highly improbable, ‘black swan’ events. This results in what Ramey calls a “securitization of the possible,” a “stranglehold of the status quo on the future” and the “foreclosure of the genuinely unknown.”20 “Such pre-emption,” Ramey writes, “seems to take the form of a divining of the future, but in actuality is simply a kind of blank repetition of the demands of the present. This is a far cry from traditional divinatory practices.”21 Ramey calls for decolonizing divination practices, which have been so ruthlessly reordered by financial predictive apparatuses.
I take the term “political economy of propensity” from Nigel Thrift, “Pass it on: Towards a political economy of propensity,” Emotion, Space and Society 1(2), 2009: 83-96.
Joshua Ramey, Politics of Divination: Neoliberal Endgame and the Religion of Contingency (London: Rowman & Littlefield, 2016), 8.
Ramey, Politics of Divination, 9.
Ramey, Politics of Divination, 114.
Ibid.
“What shall I do tomorrow?” Users retrieve many attractive options when they Google something, with the presumed-to-be ‘best’ option appearing at the top of the search results, or first in the autofill suggestions. This offers accelerated pathways to knowledge, action and future behaviour—but, in the very same gesture, also pre-empts a fuller range of genuinely unforeseen possibilities. Basing suggestions on what platforms might ‘think’ users want, thereby precluding more contingent encounters between subjects and information, enacts something like what Ramey describes as a neoliberal stranglehold on the future. Yet it also progresses by means of concentrating information through acts of questioning—something that the ancient Delphic oracle also enacted. Why, after all, were the oracles seen as bearers of truth? Firstly, as John R. Hale and Jelle Zeilinga De Boer suggest, the oracle’s cryptic phrases could be neither proven nor disproven.22 Speaking enigmatically, the oracle could never be wrong; if her advice were wrongly interpreted, surely this would be due to the seeker’s flawed self-understanding. Secondly, because querents travelled from all over the ancient world to consult the Delphic oracle, the priests who managed them learned a great deal about the events of the day by listening to their questions.23 Thus, when someone came asking for advice as to, say, whether or not to go to battle with a neighbouring city-state, the priests could draw on the information they had gleaned from other seekers’ questions to inform the answers they offered. This distribution of informational power is strangely echoed by Google’s. As a major gateway connecting questions to answers—governing answerability—Google amasses yet more information by learning from users’ questions and queries.24 Of course, it accumulates all sorts of biases and feedback-fuelled presumptions along the way.25 The future-oracular mode of address enacted by the oracles of Delphi and their attendants connects a seeker of counsel with a multivalent body of information, which is then reflected back to the seeker as a prophecy. This scenario consolidates the political power of the priests, the guardians of a centralized information-hub. Though its claims to authority are decidedly different, and it routinely pre-empts the genuinely unknown, Google’s version of the future-oracular draws out the Delphic propensity to concentrate information through its means of connecting questions and answers—a potential long since embedded in divination practices. Through a tightening of feedback loops,26 the ‘Googling’ version of the future-oracular uses information concentration and analysis to encase its subject in self-actualizing self-similarity: a sheath of personalized potential.
The Oracle of Delphi: Secrets Revealed, The History Channel, 2003, https://www.youtube.com/watch?v=i1uQqvopvAg (accessed 12 September 2015).
Ibid.
And just as the oracles at Delphi were handsomely paid for their services by seekers bringing lavish gifts and tributes from the city-states, so Google compensates itself richly by selling user information to advertisers and, in turn, selling advertisers the chance to win at keyword auctions through its AdWords program.
See, for instance, Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: New York University Press, 2018).
Though it is beyond the scope of this essay to critically engage with this topic, the term hyperstition usefully describes the means through which feedback loops entangle the relationships between present and future, and enable narratives to produce their own realities. See Simon O’Sullivan, “Accelerationism, Hyperstition and Myth-Science,” Cyclops: Journal of Contemporary Theory, Theory of Religion and Experimental Theory, no. 2 (2017), 11-44.
Thus, today’s future-oracular mode of address enacts highly tailored, prophetic claims, seemingly addressed to particular subjects: claims to present not exactly—not entirely—what will happen to a given subject but, rather, what should be carried out: the best path forward, given an array of available propensities and possibilities. As such, it emblematizes something of the tensions between prediction and precarity, self-actualization and soft prohibition that surround decision-fatigued27 neoliberal subjects and their decision-making processes. In the twenty-first century, when (as Michael Marder recently put it) “all the world’s a dump,”28 austerity, shrinking job markets, deteriorating environmental conditions, and tattered social and mental ecologies instantiate systemic uncertainty, which is felt as lived precarity: in homes, friendships, families. For many, trivial, day-to-day choices (which shampoo should I buy? which jeans?) yield dozens, if not hundreds of available options. Yet more significant, life-altering choices (where can I afford to live? would going to college pay off? what job shall I take?) seem to yield fewer and fewer viable options. In light of this proliferation of trivial choice, combined with the foreclosure of significant possibilities, the need to enhance serendipity comes to the fore. A nudge in the ‘right’ direction can economize trivial decision-making29. When it comes to more serious decisions, performing the rite of becoming-networked-profile can produce possible, fertile weak-tie connections in a barren landscape of diminished opportunity. (In a newly entrepreneurial-vocational iteration of Cinderella-like hope, users tweak LinkedIn profiles, hoping to be clicked on by a connection’s connection...) Google and so many other platforms—the serendipity accelerators—easily find their supplicants, even as their regime of accumulation (which tends to massively concentrate wealth) contributes to the very conditions from which these supplicants seek respite.
See Barry Schwartz, “Self-Determination: the Tyranny of Freedom,” American Psychologist 55(1), 2000: 79-88; Barry Schwartz, The Paradox of Choice: Why More is Less (London: Harper Collins e-books, 2004); and Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness (New Haven: Yale University Press, 2008).
Michael Marder, “All the World’s a Dump,” The Philosophical Salon, 10 November 2018, https://thephilosophicalsalon.com/all-the-worlds-a-dump/.
On nudging, see Richard Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness (New Haven: Yale University Press, 2008).
Many accounts of online and statistical predictive apparatuses, particularly those that accrue around the term ‘algorithmic governmentality,’ argue that such governmentality “bypasses and avoids any encounter with human reflexive subjects.”30 Instead of revolving around—or indeed addressing—individuals or subjects, online predictive apparatuses transform selves into ‘dividuals’: divided-up subsets of relations, traits and data.31 Indeed, when our data is parsed into credit scores, customer profiles, and algorithmically analysed identities, the apparatuses that have made these distinctions certainly do not apprehend ‘us’ as sovereign subjects. Though they may relentlessly enumerate traits and attribute propensities, they do not conceive of individuality as such. There is a heady debate around whether, how, and to what extent ‘the subject’ is still a viable site from which to stage resistance to the behaviourist, predictive imperatives of surveillance capitalism (and neoliberalism more broadly). Wendy Brown famously laments the loss of individual sovereignty under neoliberalism in Undoing the Demos.32 Counter to this, Joshua Ramey has argued that Brown’s account is afflicted with nostalgia for the sovereign subject. He suggests that a more radical approach might be “dividuating the demos”:33 politicizing, and indeed decolonizing, the terrain of the dividual, which the current phase of capitalism has so richly mapped and thoroughly leveraged.
Antoinette Rouvroy, “The End(s) of Critique: Data-Behaviourism vs. Due-Process,” in Mireille Hildebrandt and Katja de Vries, eds., Privacy, Due Process and the Computational Turn: Philosophers of Law Meet Philosophers of Technology (London: Routledge, 2012), 2.
Gilles Deleuze, “Postscript on the Societies of Control,” October 59 (Winter 1992): 3-7.
Wendy Brown, Undoing the Demos: Neoliberalism’s Stealth Revolution (New York: Zone Books, 2015).
Ramey, Politics of Divination, 149.
Although today’s predictive apparatuses may indeed bypass subjects, this by no means renders Googling subjects, Google’s supplicants, irrelevant. One reason to theorize the future-oracular mode of address is that it accounts for the predictive imperative from the perspective of the seeking-subject: the subject who ‘Googles,’ who wishes to know, who seeks alignment with the platform’s ecologies of information and, in so doing, transforms the corporation into a verb, an act, an incantation. Given the scope of the seeking-subject’s rhetorical power within scenarios of search—her curiosity becoming the very reason for the business model’s claim to legitimacy—she cannot be written out of the equation so easily. But here, she is not so much an individual as the embodiment of a very particular seeking attitude: an attitude of answerability,34 an attitude that bears the marks of the financial demands of the present, and is thoroughly aligned with surveillance capitalism’s colonization of seeking. Despite its over-determined alignments with the surveillance-capitalist present, the Googling subject’s attitude also carries an openness to transformation that hints at the potential for seeking to be aligned otherwise. To decolonize divination, following Ramey, perhaps involves considering what is to be made of the Googling-subject’s performance of seeking, and recognizing that these acts of seeking carry the seeds of another order: another set of alignments, of which the current iterations of the future-oracular are but one variety. Perhaps realigning the seeking-subject’s attitude would involve counter-speculating on surveillance capitalism’s politics of certainty. Just as surveillance capitalism enacts confidence in its predictions by actively producing the future it purports to predict, so acts that reorient attitudes of seeking might also enact confidence in their ability to redirect the searching attitude toward other apparatuses and more indeterminate (or differently determinate) alignments.
See Mikhail Bakhtin, Art and Answerability: Early Philosophical Essays by M.M. Bakhtin, ed. Michael Holquist and Vadim Liapunov, trans. Vadim Liapunov (Austin: University of Texas Press, 1990).
—
Emily Rosamond is a Canadian artist and writer. She is Lecturer in Visual Cultures and Joint Programme Leader, BA Fine Art and History of Art, at Goldsmiths, University of London.