‘Trust and safety’: exchange, protection and the digital market–fortress in platform capitalism
Graham Denyer Willis
Socio-Economic Review, Volume 21, Issue 4, October 2023, Pages 1877–1895, https://doi.org/10.1093/ser/mwad003
Abstract
As a space of exchange that transcends historic jurisdictions, the Internet lacks a dominant security provider. This research examines how and why firms in platform capitalism have developed their own method and means of protection. ‘Trust and Safety’ (T&S) is a novel division of labour that has become nearly ubiquitous in the industry, encompassing many of the concerns raised about the influence of technology on politics, society and economy. The making of capitalist exchange online requires a similarly discrete making of protection beyond the state. T&S work ensures that interaction and exchange are sheltered, doing so by curating trust between atomized individuals, and asserting what belongs and what does not. T&S’ centrality to platform capitalism can be explained—in part—by Weber’s notion of the city as market–fortress; a spatialized condition of exchange protection that is foundational for fastening capitalism together and seeking to make it, and inequality, inevitable.
‘Are we responsible for defining who a terrorist is?’ asks a woman seated to my left, rhetorically but nervously. She is unnerved by a recent event in Charlottesville, Virginia, where a driver ran down a group of protesters, killing one person and injuring 19 others in the process. We are in a break-out group, discussing how technology firms can defend their value. I am surrounded by young software engineers—none above 40 by my eye—in an office building adjacent to the Tenderloin, San Francisco. We are here in the ‘Risk Room’,1 a community of employees from many of the world’s largest technology firms with bi-monthly meetups.2 She, like the others, is concerned with what might have happened if this driver held their bitcoin, used one of their chatrooms, or regularly rented through their room-sharing service. Everyone is here because they ask different versions of the same question every day: how can I protect my company from threats and keep users safe? All are part of a new kind of institution known widely in Silicon Valley as ‘Trust and Safety’. All are concerned that the presence of threats, and an unsafe environment for users, will ultimately destroy their bottom line. Perhaps overnight. Being oblivious to a terrorist is just one example of a limitless horizon of bad possibilities.
The influence of technology on politics, society and economy is unavoidable today. From Twitter’s ban on former President Trump to ongoing concerns about secretive political meddling and targeting on Facebook and other platforms, most now agree that ubiquitous technology use raises crucial questions about ongoing transformations in capitalism and its shifting social relations. Scholars have many answers, including arguing for a recognition of private governance of free speech (Klonick, 2017), algorithmic regulation and domination (O'Neil, 2016; Noble, 2018; Yeung, 2018), unhindered control of data and its summary removal (Gillespie, 2018), the ethics of artificial intelligence and machine learning (Lee, 2018), and the racialization of digital automation (Benjamin, 2019). Important as each of these are, these explanations address parts of a problem of greater magnitude: how online companies accumulate and protect their value by ‘governing through data’ (Johns, 2021), and with governance that is relational but entirely undemocratic (Viljoen, 2021) in a moment of tightening monopolization. While there is burgeoning recognition ‘of the forces that concentrate internet audiences’ (Hindman, 2018, p. 6), scholars have yet to rigorously consider how online exchange is made possible by both facilitation and coercion amidst limited regulation and weak state governance of online space.
In an online space that transcends state jurisdictions, with new opportunities for rapid accumulation, novel marketplaces and threats taking aim at vast profits and influence, but otherwise without a dominant security provider, companies founded on the Internet build their own means to protect themselves—in part, by delineating space, authorizing exchange and fortifying what they accrue in revenue, brand and indistinct ‘value’.
To address this concern I undertook ethnographic and qualitative research3 along three lines. First, I carried out 64 interviews with technology workers in the Bay Area of California who work on the corporate fixtures that are now widely called ‘Trust and Safety’ or ‘Community Standards’ teams. Secondly, I obtained participant observation access to a cross-company forum—the ‘Risk Room’—an invite-only and Chatham House rules group that met on a bi-monthly basis, composed of workers of almost every large post-2000 technology firm based in the Bay Area. Thirdly, I carried out ethnographic work as an online ‘content moderator’ for a Bay Area company that sorts and labels real-time data from a social media network.
I show that T&S work—besides being historically novel—is indispensable to the governance of digital space and to the changing nature of capitalism. As an incipient division of labour, T&S work gives norms and bounds to a rapidly evolving system of expansive online economic exchange in two ways: (a) by quietly facilitating exchange between users and/or users and companies, while (b) using data to quietly predict and exclude real and perceived threats. This work is also subtly coercive and intentionally opaque, intent on improving accumulation by quietly curating the user experience (UX). To do so, it advances techniques of identity verification to assert who must be let in at the gatehouse and employs metrics of user trustworthiness in the marketplace for two ends: (a) to increase return visitors, who generate data that can be gathered, aggregated and recategorized; and (b) to incentivize purchases or revenue-generating interactions between users or between users and the platform.
These same categories, and the data that circumscribe them, are concurrently used to define and act upon real and imagined threats to these two ends, (a) and (b). Threats are myriad and almost without bounds, ranging from potential government regulation to violent and sexual imagery, charge-back schemes, ‘terrorists’ and the ever-present but ill-defined figure widely spoken of as the ‘bad actor’. Moreover, T&S work must predict when something unknown and bad might happen, and why, which often gets boiled down to ‘intent’.
Governance of markets under capitalism requires techniques of security to enable predictable exchange, safeguard surpluses and maintain inequality. To make sense of this problem in a novel mode of production I return to Max Weber’s (1962) understanding of the city as ‘a fusion of fortress and marketplace’; or, as he put it, an ‘area’, an ‘authority’ and an ‘economy’ (p. 80) bound in hierarchical relations of political economy. Here, though, the existence of a marketplace as a spatial economy and density of social relations is created, curated and protected by a terrestrial authority that presides over predictable exchange, deriving revenue from it and protecting it to ensure continued benefit. With the ‘digital market–fortress’ that I describe, a differentiation is key: the distinction that Weber made between politics and economy is elided, and yet it is nonetheless in keeping with contemporary capitalist social relations.
To argue that platform capitalism requires reformatting the meaning and doing of trust, and that this requires techniques of protection, I describe three patterns. First, platforms fail when they attend to user trust or to security alone. Routine exchange with impersonal trust is possible when protection shelters it. Secondly, T&S work uses the outlier to re-assert the inside, and to border it. Thirdly, ever finer techniques are premised on parsing the ‘bad actor’ at the gatehouse, allowing fluidity for all else. Knowing who is not a bad actor enables a global network of market–fortresses increasingly focussed on trustworthy digital identification. All of this work is crucial to fastening together a kind of exchange that is far from inevitable, while protecting a rescaling of capitalism in a moment of vertiginous accumulation and growing inequality. In other words, this work contributes three-fold: (a) by revealing a synthesized theorization of platform capitalism as dependent on an economy of protection, (b) by introducing and describing the significance of T&S as a historically novel but now structurally indispensable sphere of global work for keeping this mode of capitalism together and (c) by conceptualizing the socio-spatiality that binds curated trust to networked protection and the platform economy.
1. Trusting ‘the marketplace’ and protecting it
Capitalist exchange online is increasingly, and rightly, seen as transcending existing political jurisdictions and spheres of authority, while dramatically reorganizing the global economy (Khan, 2017; Srnicek, 2017; West, 2019; Kenney et al., 2021). Under limited, uncertain or patchwork regulation (Collier et al., 2018), with an ability to evade tax regimes, and holding technological knowledge beyond the capacity of states to gauge, online companies have created unavoidably distinct spaces of accumulation. These spaces of exchange and capital production are sources of novel surplus creation that drive a new iteration of capitalism commonly described as ‘platforms’ (Rahman and Thelen, 2019). At play is a distinctive kind of ‘platform power’ inhering in companies with substantial economic scale (Rahman and Thelen, 2019) that is nonetheless ‘private power’ (Rahman, 2017). As much as ‘platforms enroll users in a participatory economic culture’ (Langley and Leyshon, 2017, p. 4), they also encircle them within it. This power is neither happenstance, nor undefended. But it is obscure, being rooted in proprietary holdings, algorithmic knowledge and great insularity (Pasquale, 2015; Burrell, 2016; Burrell and Fourcade, 2021).
This ‘everyday defence’ matters, but it is not well understood or theorized. Protecting the wealth accrued in and through these most capitalist of marketplaces is no small order. It requires maintaining absolute fluidity in global networks and creating ever more impenetrable walls. Not that fluidity and walls, in and of themselves, are novel for capitalism. What is in play here are the changing conditions of freedom and unfreedom under digital capitalism ‘as the defining characteristic of a capitalist world-economy’ (Wallerstein, 1976, p. 378), such that the digital ‘space of flows’ (Castells, 2009) is accompanied by ‘spaces of vulnerability’ (Smith, 1996).
This is now especially the case because companies cannot or will not call on the state to enforce order or to punish the unprecedented scale of ne’er-do-wells, partly because in the country most of them call home ‘the federal state is administratively weak but normatively strong’ (Dobbin and Sutton, 1998, p. 441). Moreover, firms are generally loath to call the cops, preferring to continue to advance their interests without opening questions about the scope of their activities and how revenue is accrued. Not calling on the state, perhaps not trusting it, and recognizing that their scope encapsulates nearly every state in the world—itself a world of distinctions—is another reason why online marketplaces are deliberately discreet about defending their interests. One way or another, this calls for novel terms of protection beyond what states can provide, especially ‘at speed and scale’. Like the threat, and the mode of accumulation, it must be derived from digital data.
Platform capitalism and its marketplace model, which includes places of direct exchange like eBay as much as user-populated platforms like Facebook, YouTube, Airbnb, Uber, Tinder, Coinbase, Reddit, Pinterest, Pornhub and far beyond, have been enabled by a particular kind of US Federal legislation, Section 230 (Flew et al., 2019; Kosseff, 2019). Section 230 of the USA’s 1996 Communications Decency Act absolved user-populated platform companies from responsibility for the content and material hosted by them. This allowed for the proliferation of content from many corners of the world and emboldened a means for people to exchange it. It created two definitive patterns: First, it set a cornerstone for the platform economy and the online marketplace as a distinctive place for interaction and possible exchange. Secondly, it created an impetus for companies to determine, themselves, what content or material should belong, and to decide what to do when they identify things that do not. In a distinctive way, Section 230 served as much to charter the platform economy as it did to regulate it, while creating an incipient need for industry standards and common practices (Frenken et al., 2020).
But as companies gathered and organized vast amounts of content, material and data, they also encountered a problem associated with what to disallow. For some, this is a discrete problem of content moderation (Kosseff, 2019; Roberts, 2019) and of how Internet content is sanitized (Gray and Suri, 2019). But it goes much further. Companies also needed to decide how to recognize what they understand as ‘bad intent’, such as bandits at the gates of the marketplace, someone disguised as someone else within it, or someone biding their time unobtrusively until the right moment to make their move. There are many questions germane to the kind of data and type of exchange (Boyer, 2021), be they about dating sites, home sharing, pornography or interior design—and different uses of labour, too (Fuchs, 2014; Gray and Suri, 2019). Moreover, these platforms must balance a notion of intellectual property and corporate power—such as that of credit card and payments companies—with what people might actually want to exchange on a peer-to-peer platform: is a Tifany’s ring as suitable to buy and sell as a Tiffany’s ring if someone wants a cheaper option?
Exchange demands protection, in part because it rests on a historically-derived and unequal field of wealth concentration. Protection gives trustworthiness contemporary form and a mutual understanding about what can be exchanged and how exchange should happen, in ways that bound out questions of history and inequality. Crucial, then, is how capitalist exchange is enabled, protected and governed in a space of active contestation and uncertainty, with an acute prospect of failure and predation. Many have pointed out that the state is not required for exchange to exist in predictable terms (Ellickson, 1991; Strazzari and Kamphuis, 2012). This seems to be particularly the case where a new field of capitalist expansion is in play, requiring distinctive means of asserting order beyond the state (Leeson, 2009; Frymer, 2014). Where states cannot or will not be counted on, capitalist development has seen myriad ways of governing and protecting exchange and accumulation (Milgrom et al., 1990; Casari, 2007). Moreover, scholars have long discussed that discrete territory is not a requisite of private order, trust networks or governance (Tilly, 2005; Lessing and Denyer Willis, 2019). Trust can follow mobile populations, processes, social boundaries and communities across cities, prisons, regions and states (Brenner and Elden, 2009; Biondi, 2016), existing both in the twilight of the state and/or enabled by its repertoires of violence, law and regulation (Scott, 2009; Gambetta, 2011).
But protecting capitalist exchange online is probably different from the British East India Company advancing and protecting its king-chartered right to trade and dominate beyond the shores of the UK (Phillips and Sharman, 2020). Or from the way that prison inmates find the means to exchange and trust across racial groups without the legitimacy of state law in the cellblock (Skarbek, 2011). Exchange beyond law shares a common problem, however. If there is no mutually recognized authority that can be called on to establish or arbitrate orderly exchange, and exchange is either necessary or acutely beneficial, new means of establishing trust between parties are required, as is a means to disaggregate trustworthiness from untrustworthiness and to fortify the distinction.
Of course, centralized governance and authority are not sufficient to establish trustworthy social relations and exchange. A ruler may demand that people buy and sell, set the terms with inflexible law and decree punishment for broken rules. This might be sufficient for exchange in a fixed location where parties cannot migrate to other markets, create new ones or find refuge in black markets or the informal sector. As Farrell (2004) suggests, rulers must minimize their power—or, at least make it appear minimal—to allow for mutual trust between rulers and the ruled, and among the ruled.
Perhaps a concern with exchange online requires reflection on longstanding debates about the nature of trust and social relations, such as whether exchange is embedded in social relations, is social relations and how—or if—trust changes with economic reconfiguration, such as with industrialization or the introduction of new forms of money (Edwards, 1975; Granovetter, 1985; Muldrew, 2016). The rich cross-disciplinary discussion about trustworthy relationships between individuals has much to offer when examining interpersonal transactions and what they constitute. And yet this close and attentive look at the relationality of trust struggles to capture the conditions of market-making, the instrumental creation of interpersonal trust and its exploitation for profit.
Instead, the trust being made by capitalist firms on the Internet is close to what some scholars have cautioned against instrumentalizing. In 1985, Granovetter spoke to two aspects of interest here. First, he held that scholars had overstated the importance of historical shifts in the mode of production and concordant restructurings of trust and social relations. Secondly, he argued that it was insufficient to understand market relations as disembedded from social relations, such that the market could be fostered and advanced by connecting atomized individuals with little common context and sociality, as orthodox economists might have it.
In their attention to T&S, digital firms have a very narrow understanding of what trust is and why it matters. For them, at question is not so much whether the market is or is not embedded in social relations, nor whether it changes as capitalism changes, so much as the fact that these firms operationalize an essentialized idea of how to make trust between people in order to build the digital marketplace upon it. What is crucial to recognize, then, is that it is the task of online firms, like Granovetter’s characterization of orthodox economists, to make trust between atomized actors. This kind of trust—a meticulous project more than an assumption or conceptualization—is actively fostered and curated by firms to make marketplaces in new spaces and ways, and to accelerate and encourage exchange between people that will never encounter or know each other more than as a digital ‘handle’ or an aggregate of curated reviews of their past transactions.
Seeing trust with this kind of hierarchy comes closer to how Shapiro (1987) understands ‘impersonal trust’. Impersonal trust reveals a different relationship with authority, including as a means of social control. As she writes of the ‘guardians of trust’ within capitalist firms, the social relationality of trust is much less important than keeping people transacting under a rubric, where the objective is ‘striving for social connection in economic relations’ and to find what ‘best insures agent fidelity’ (p. 635).
However, it is also insufficient to look too closely at the social relations of people trusting each other to understand why people are congregating, interacting and buying and selling online by the trillions. ‘Helping people transact’ and ‘agent fidelity’ to a marketplace of repeat interactions and surplus-creating exchange are dependent on bounding out naysayers, those who dispute the rules, or who target the accumulation of wealth. Protecting property and people from being defrauded, and from losing trust in atomized interactions, is crucial—if not necessary—for exchange in conditions of impersonal trust. This is especially the case since the pillars of trustworthiness with impersonal trust are wafer-thin, and perhaps especially fleeting. Impersonal trust collapses when doubt is amplified and loss is possible or believed possible. External threats and risks have special and outsized power here, even as mere spectre.
In other words, trustworthy exchange hangs on authority and proactivity. And yet, this authority cannot be too obvious in the making and enforcement of rules, nor too heavy-handed in punishment such that it might instil doubt in the fairness of the marketplace itself. The relationship between trust in digital exchange and the fortification that enables it calls for conceptualization.
In his discussion about the nature of cities, Max Weber (1962, p. 85) twins the ideas of market and fortress to describe a ‘complicated’ relationship between politics and economy that is ‘always decisively important for the composition of the city’. Weber understood this relationship between the political and the economic as symbiotic in the making of urban life; a market requires protection and a protector requires the revenues of the market. This ‘fusion’ of market and fortress that leads to a concentrated population, and maintains it with regular and routine economic interaction, is not happenstance. It derives from a concentration of ‘consumption power’ from the ‘prince’s military household’ with the ‘protection it guaranteed’ for a ‘civil economic population’. Here, a political sphere—in his case a titled household—attracts an economic sphere to ‘procure money revenues…either by taxing commerce or trade or participating in it through capital advances.’
What Weber describes as a crucial partnership between the political and the economic in the early making of cities also delineates an important related distinction between political authority and feudal economic life constitutive of how and why people come together, settle in density and give exchange a spatial form. His explanation for the emergence of discrete locations of economic exchange unites both trustworthy relations and protection—in other words, the creation of a sheltered and predictable space for routine transactions.
And yet, Weber’s use of an historical retrospective on cities sees an inseparable symbiosis between political authority and economy, foreclosing the possibility that an economic logic or an economic hierarchy could itself fulfil the conditions for marketplace establishment, security and growth. Much different today, however, the global ‘consumption power’ of advanced stock market capitalism, and the nature of digital monopolies that transcend the jurisdiction and influence of states, decentre the importance of a political authority in asserting security over a market. What digital platforms create depends very little on a political sphere—‘the prince’s military household’—to incentivize exchange, stimulate demand or intervene to protect its productive power on an everyday basis. In fact, the T&S work that protects this capitalist marketplace seeks to bind out threats, and to keep the state and the prospect of its intervention in the market at a distance.
Bounding out political authority does not change how the marketplace can be used and regimented to accumulate surplus. To the contrary, in the digital sphere, ‘taxing commerce or trade or participating in it through capital advances’ still takes place, but these revenues are not primarily accrued by a ‘prince’. Rather, revenue defines the private market–fortresses they build, foster and protect, with this revenue creation model as the authority. The making of new spatial markets, and the curation of trust between parties to exchange within the markets, appears to be both distinctly old and peculiarly new. Novel are the method, the means and the spatiality.
2. ‘Clean and well lit’: trust, safety and online space
One novel way to make sense of the expansion and deepening of platform capitalism, beyond discursive plays (Pasquale, 2016) or using data and/or employment by these companies themselves, is to examine its material defences and perceived threats, from within. Ethnography, in particular, can provide insight into the logics of opacity and opacity-making (Christin, 2020)—in this case of what trust means within the technology industry, and not, as more widely thought of, as a question of public trust in technology.
T&S is a now-ubiquitous sphere of empirical technology work across the industry, but especially within firms. As Kenny Shi (2016), a T&S technology worker, has described it:
Trust and Safety is a term commonly used on platforms where people interact. It is the foundation to enable unfamiliar or total strangers to treat each other peacefully and fairly. When such platforms involve transactions or trading, they are called marketplaces, such as eBay; but they don’t have to involve money, for example, a dating service needs Trust and Safety as much as a marketplace. Another example is the new Trust and Safety Council at Twitter.
Trust, to some extent, is a perception, but its basis is safety. Only when people feel confident and comfortable about the safety of their presence and activities, in other words, there is no negative implication or loss to themselves, then they trust the platform and other people on the platform.
Trust and Safety is often more policy-driven. Unlike fraud which bears a universal understanding, policies are designed and put in place stragetically [sic]. The policy teams choose and decide on what policies curate the desired culture (and user base) of a platform, as well as bring the maximum business returns.
Most firms over a particular size now have T&S teams. These teams cover a variety of tasks, from proactively mitigating risks to revenue and brand, identifying threats and problematic users, to helping create and curate an environment that is desirable to users. In 2021, on more or less any given day, LinkedIn reported some 16 000 ‘Trust and Safety Jobs in the United States’, with more than 1000 active recruitments in the USA alone. Google has T&S jobs across a variety of categories from ‘Head of Global Incident Management’ to ‘Scaled Abuse Analyst’. For Facebook, jobs range from ‘Data Scientist, Brand and Integrity’ to ‘Safety/Dangerous Organizations Project Manager’. Reddit calls some of their T&S workers ‘Anti-Evil Operations Specialists’. This kind of work also sometimes includes what these workers call ‘compliance’, something tied to state regulation, such as money laundering rules and Patriot Act law. But T&S work is distinctive from pre-Internet law, both in what it attends to and in its relationship with revenue generation.
Limited glimpses of T&S work have emerged in scholarly literature. This is partly because it has been understood in different lights, as a facet of a larger problem, and in different moments of its technical and normative footprint. As McKnight et al. (2002) put it 20 years ago, speaking of mitigating the threats to early web commerce, ‘Understanding the nature and antecedents of consumer trust in the web can provide web vendors with a set of manageable, strategic levers to build such trust, which will promote greater acceptance of B2C electronic commerce’ (p. 298, emphasis added). They go on to provide a ‘trust-building model’ that mitigates risk, potential loss and reputational damage. Other work provides key examples of how companies seek to actively forge trust between users, such as by testing the size and shape of profile pictures and measuring the effectiveness of ‘star’ reputation systems (Resnick and Zeckhauser, 2002; Pavlou, 2003; Van der Heijden et al., 2003).
Social scientists have increasingly looked closely at aspects within the ken of T&S, most especially content moderation and its creation of a problematic global division of labour (Gillespie, 2018; Roberts, 2019), user reporting systems (Crawford and Gillespie, 2016), impingements on free speech (Klonick, 2019), lack of accountability (Gorwa, 2019), or the challenge of litigating against the Internet and firms as a global force (Woods, 2018). In her work on online processes governing free speech, Klonick (2017) describes entities like Facebook as ‘the new governors’, while speaking primarily to the absence of legal regulation and the effect of Facebook’s moderation of politics on ‘democratic culture’ (p. 1601).
For all its many strengths, this existing research can be much fortified by a better understanding of how such logics have developed, how far they go in systemic terms, and of the ways that these questions are always at once about labour, inequality, surplus generation and their respective maintenance. That they are bound together in a specific kind of everyday work must be a political economy question.
3. Exchange and the demand for safety
It is not obvious that a lively space of online exchange will thrive. John was once a Marine. Now, he leads the trust team for one of the world’s largest social networking sites, associated with employment, with hundreds of millions of users. Two decades ago John began his career at a virtual life company, a website similar to ‘Second Life’—an online world that users curate and act in as though it were primary. In the late 1990s, these virtual life spaces, where users invent and live through an avatar, emerged as spheres of sociality and interaction. These ‘second spaces’ offered users the possibility to create lives that they could not live, to do things that they would not do, and to pretend they were something that they could not be. One of the only rules was that there were few rules. They were deliberately uninhibited and intentionally apolitical, and they were exciting new spaces of sociality and exchange without law. In some ways, these spheres represented what the Internet was in those days, an innocent, unconstrained, and more or less unsecured space. Occasionally, the platform would have to report something to local police.
And yet, John explained how, over a series of years, this platform became vulnerable to problems that police could not deal with. With lax rules, and rules that they were reluctant to enforce, people started testing the boundaries. Physical harm to others was not such a big problem; they did not see violence migrating offline as a result. But, increasingly, ‘bad actors’—for John, groups from outside the USA—started to game the system. All users had to buy tokens with credit cards, which allowed them to build their avatars and spaces, accumulate and influence others; but charge-back schemes began to pollute the experience. Fraudulent credit card purchases, or efforts to make purchases and then seek refunds underhandedly, undermined how users saw each other, and the platform.
When unsuspecting users purchased or traded things with other users that ended up being the product of a scheme—getting them in trouble, too—the problem infected the whole community. The space gained a reputation for fraudsters and unpredictability. Many users fled, migrating to Second Life and other more predictable virtual worlds. Few wanted to put up with the hassle of being declared a criminal and having to prove they were not, much less with losing money. Online life was supposed to be an escape.
For John, in retrospect, this was exactly why this particular ‘second life’ space went nowhere as a successful company. They were not fast enough at counteracting threats to the platform and at making it a place that users could visit, interact with others and spend time in without worrying. And while the company did not fold entirely, it began to limp along with a heavily reduced user base. John speaks of it as a crucial learning experience, worthy of sharing.
Learning what works to keep people online is trial and error. ‘It all started with stamps’, Paul, a former eBay employee, told me in 2017, ‘No one had any idea how to know if a stamp was real.’ In the early 2000s, leaders at eBay were faced with a different version of the same kind of problem. This new user-populated website of anonymous and geographically dispersed members, which focused on the buying and selling of used or ‘like new’ goods, had to find a way to make potential but wary customers believe in the platform and each other. The opportunity was a new boundary-smashing space for exchange, the Internet, that people could populate themselves. The threat was the tiny bits of ‘small data’, the individual or small group data points that made other people doubt spending their money online. A bad stamp was not just a bad stamp, it was doubt in the platform itself. A bad stamp left unchecked undermined the very feasibility of this incipient global business, and the online model of exchange.
As the platform grew around collectibles—stamps, signed goods, hard-to-verify second-hand bits and pieces—they faced an acute problem of scale. Amidst millions of bytes of garage sale material, seen from 30 000 feet in Silicon Valley, were things that needed to be verified. A sought-after stamp would attract specialized users from disparate places. But people would not bid its value if they could not trust its authenticity. Losing out on the stamp market was not a trivial question. Getting it right was the business model.
The team started by deciding that the buyer was more important than the seller. Supply would come, but demand could be scared away. They figured out that the best people to help arbitrate demand were right in front of them—their stamp ‘power sellers’. These sellers were also buyers, and they knew the details of what mattered about stamps. They recruited a group of them, gave them a private online discussion forum, and empowered them as a specialized team able to make decisions about whether a particular stamp or seller was ‘gaming’ the system. They could then use that to sanction or not.
By emphasizing simplicity, decentralizing some decision-making and streamlining the importance of predictability of process and outcome, this group started to mitigate every good reason—and there were many—to not buy from a used car lot where the tires could not be kicked. The emphasis was reproduced in many ways: They created a dispute resolution system for when things went bad, so people did not feel cheated. They designed specifications for images and their dimensions. They created a peer rating system and a user reporting mechanism based on comments. They gave companies with high-value goods—Tiffany’s was one—a back door verification process for their goods, to allow them to detect counterfeits and protect the value of their brand. The online yard sale became ubiquitous, and trustworthy, against the odds.
4. The outlier makes the insider
I am barely removed from ethnographic fieldwork with São Paulo, Brazil’s police detectives, when a police ‘friend’ shares a video from a well-known Facebook group of police officers (Denyer Willis, 2015). The video shows the aftermath of a police shooting, narrowing in on the faces of three young men who lie dying on the street. The video is horrifying, and contains difficult dialogue, including from two police officers:
A voice, from the background: ‘This piece of shit won’t die man.’
A clearer voice: ‘Are you going to take a long time to die, dammit?’
The page where the video was posted also displayed a series of photos of the scene, now obviously a police shooting on the West Side of the city of São Paulo, Brazil. The photos capture the police cars, as well as the faces and precinct numbers of police officers involved. By the time I took a screenshot of it for my own records, the video had been shared more than 700 times. The widespread circulation of the video called for something of a public reckoning in a city where police killed more than 750 people that year, and around that many every year since. For many, the momentary visibility of police violence was an abrupt instance of transparency on Brazil’s black genocide and public support for it (Nascimento, 2016).
But despite its revelations about power, within 2 days, the video was removed from Facebook.4 It was not alone. In a brief window between 2012 and 2013, Facebook was awash with these kinds of images and videos posted by individual police or people closely associated with them. One explanation for the removal of this content is that governments requested it. At that time, governments could ask Facebook to take down specific images by making manual requests that were considered on a case-by-case basis by people in Silicon Valley. Back then, too, Facebook still publicly touted its role in the ‘disintermediation of politics’ (Healy, 2017), and spoke of a phobia of being seen as working for governments (Hoffmann et al., 2018). ‘Transparency reports’ from the time revealed that for the 6-month window surrounding the video, they complied with less than one-third of government requests (Facebook, Inc., 2021). Across the whole country of 199 million people, they responded with ‘some data produced’ for authorities in only 215 cases, whether about user profiles, private messages, domestic abuse or removal of content deemed inappropriate.
There is a stronger explanation, in keeping with Facebook’s internal processes. At the time, the platform was heavily dependent on user ‘flags’ to report content (Crawford and Gillespie, 2016). These images would then be reviewed, individually, by content moderators who would judge the content against a set of rules. One former Facebook employee described to me how content moderation work was seen as an undesirable consequence of the exciting work of software engineering; a project of waste disposal caused by unproductive people and side effects. In keeping, the cost and outcomes of this process were being managed by outsourcing to other firms, with labour in Manila and elsewhere (Chen, 2014).
At the time, the approach to this outlier data was rudimentary. Content moderation rules used by Facebook from 2012 consisted of 52 bullet points across nine categories5 that fit on a single PowerPoint slide. Only one government, Turkey, figured in these rules, and it had compelled the company to require that references to Ataturk be banned. Otherwise, the rules were both highly arbitrary and occidental, written in a tone akin to American college drawl, and dramatically skewed towards a fixation on sex and violence. They were also trite, vague and imprecise. For example, as they put it, ‘crushed heads, limbs, etc. are ok as long as no insides are showing’.
‘Moose-knuckles’, ‘camel toes’ and ‘ear wax’ are also described in the document as not being allowed, and to be deleted. And yet the 2012 rules make no mention of political violence, police violence or terrorism outside of requiring that all ‘credible threats’ to Law Enforcement Officers or ‘Heads of State’ be reported to supervisors. For content moderators, decisions on content followed one of three lines: (a) confirm removal from the platform, (b) deny removal or (c) escalate to superiors for possible onward reporting to authorities. Fifteen of the 52 bullet points stipulated escalation to superiors. Otherwise, a crucial point is that a flag was to be ‘confirmed’, with the data removed from the platform and eliminated, as the image of the police killing would have been.
In April 2017, barely 4 years later, I walked into the office of a start-up content moderation firm in the Bay Area. I had found this company online, while doing a thorough examination of the different companies working in the content moderation sphere. An employee, ‘Mark’, agreed to meet at their offices. This company, which I will call Tinkle, had been founded less than a year prior. Their office was full of opened and unopened Dell computer boxes, desks sat empty and parts of the office space looked like a hotel lobby or a conference room.
I learned that Tinkle was indeed a new company, part of a larger one. They had an association with a social media network with several hundred million users and were using real-time data from that platform to build a saleable content moderation product to operate ‘at scale’ and in real time. As users interacted on the social network, the machine learning model would identify suspect data and display it to content moderation workers, who would decide how to categorize it.
Tinkle used a machine learning model that, they hoped, would come to recognize different categories of images with a certainty in the high 90s. With a substantial enough amount of categorized data, and with the machine given the right set of assumptions, it would make ever greater refinements on particular traits in the data, which would be stored and now labelled. The machine would be ‘taught’ by bumping up against the binary codes assigned to data points by human reviewers. Over millions of trial-and-error interactions, Tinkle’s machine would trace towards a ‘singular’ perfect understanding of what defines the peculiarity of a category—be it ‘ear wax’, sex or a killing by police. Crucial was that they now also had, and were building, a live and labelled database of these nasty things.
Mark told me more about Tinkle’s approach. In designing automated solutions, they wanted to mitigate the cost and delay of human review. Having people look at images is time-consuming, expensive and open to error. It means expanding the time between a user uploading an image or video, which could be as serious as a real-time mass shooting or a rape event, and its display on the platform where it would be seen by users. The problem with a manual review, too, is that it leaves too much room for subjectivity, interpretation and doubt about the content on the platform. One moderator might think that an image is fine while another finds it horrendous, all within the bounds of the categories they are given. Worse, manual review does not work efficiently at scale, where tens of thousands of images are uploaded every second. Tinkle’s vision was to have speed, accuracy and cost-efficiency, minimizing these problems by automating. And if they could get it right, they could sell their solution—or their company—for top dollar.
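The mechanics Mark described, a model that scores incoming content, routes uncertain cases to moderators and treats their decisions as fresh training labels, can be sketched in a few lines. What follows is a minimal, hypothetical illustration of such a human-in-the-loop pipeline: the category names echo the four used by Tinkle's moderators, but the stub classifier, the confidence threshold and the routing logic are placeholders of my own, not the firm's actual code.

```python
# Illustrative sketch only: a human-in-the-loop moderation loop of the kind
# described above. The classifier, threshold and labelling rule are stand-ins.
from dataclasses import dataclass, field
import random

CATEGORIES = ["clean", "pg13", "explicit_violent", "explicit_sexual"]

@dataclass
class ModerationPipeline:
    confidence_threshold: float = 0.95                    # auto-decide above this
    labelled_store: list = field(default_factory=list)    # grows into a training set

    def model_score(self, item: str) -> tuple[str, float]:
        # Stand-in for a trained image classifier: returns (category, confidence).
        random.seed(hash(item))
        return random.choice(CATEGORIES), random.random()

    def human_label(self, item: str) -> str:
        # Stand-in for a content moderator's decision, paid per thousand items.
        return "explicit_violent" if "shooting" in item else "clean"

    def moderate(self, item: str) -> str:
        category, confidence = self.model_score(item)
        if confidence < self.confidence_threshold:
            # Low-confidence items are routed to a human; the decision doubles
            # as a new labelled data point fed back to the model.
            category = self.human_label(item)
            self.labelled_store.append((item, category))
        return category

pipeline = ModerationPipeline()
for upload in ["cat.jpg", "street_shooting.mp4", "earwax.png"]:
    print(upload, "->", pipeline.moderate(upload))
print("new training examples:", len(pipeline.labelled_store))
```

The point of the sketch is the feedback loop itself: every human decision is simultaneously a moderation outcome and a new row in the labelled database that the firm hoped to automate from.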
Tinkle paid their workers from 15 to 75¢ (US) per 1000 images moderated, depending on the source of the data and the machine training task. At their base rate, moderators categorizing one image per second would earn 52¢ per hour to sort images into four categories, ‘clean’, ‘PG-13’, ‘Explicit Violent’ and ‘Explicit Sexual’. The model leaned heavily on workers from places like Bangladesh, Ghana and Albania, paid by PayPal transfer after they had made a minimum amount: $20. To incentivize its workforce, Tinkle later introduced a ludic ‘leader board’ to show money earned by leading moderators. Later they created ‘badges’ to be won for tasks and metrics completed.
For all of these changes, one was fundamental. This model saw outlier data—a video of a police shooting, for example—as valuable, itself now a commodity, waste transformed to product. Moderating images became about a live machine learning model, with instant decisions doubling as coding data, instantaneously fed back to the machine. The machine was proactive, searching and extracting, and premised on an ever greater database of outliers; to be seen by content moderators but, hopefully, never by most users.
5. Bad actor at the gates
Operating on Chatham House Rules, the Risk Room was started by a nucleus of people from a nascent sub-community of technology workers with early experience in T&S. One such meeting was hosted by a small firm in San Francisco proper, in their office that they make inviting by being ‘shoe free’. They offer a free pair of slippers to all visitors. After mulling over pizza and beer, the group sits through a presentation by the host organization, which details how it has been focused on a problem of relevance across platforms. In this case, the company speaks about ensuring that the ‘UX’ is as seamless as possible. Their challenge: how to ensure that a user can log in with the fewest clicks possible without jeopardizing the security of the platform. They want to keep the attention of the user, avoid any frustration, while not allowing an intruder in who might defraud the organization, pilfer data or orchestrate a scheme of some kind. That means knowing—very fast—who is trying to get in the gates, be they good or bad.
There is recent concern that more bad people will get in, disguised as good people. When Equifax, a global and pre-Internet credit agency with feeble data security, was the victim of a major hacking event, the fallout was widespread. More than 140 million individuals had their personal data compromised, including the kind of information that has been the traditional bedrock of pre-Internet trustworthiness—credit scores, social security numbers, names, addresses, dates of birth and employment histories. Reputational damage was heavily concentrated on this firm, and acutely on the consumers whose data had been collected and managed by Equifax. But the Equifax hack created a much bigger problem. Not only could archaic Equifax not be trusted as a way to verify people’s identities and trustworthiness for consumption, but all of the data that it had held could no longer be trusted either. Equifax—the event, not the company—was close to a universal problem for all present because it made every single new user suspect.
With the Equifax data now circulating to the highest bidder, and with great fear of it being sold and used inauspiciously, some of the most important trust and verification signals that these companies used were in doubt. Having ‘traditional forms of verification’ floating around the Internet brought into question their usefulness and reliability, sum total. Participants were concerned that this analogue pre-Internet data—crude in its own way, territorially confined, and some of it already easily available online—was increasingly insufficient to verify users, their purchases and their intentions. For those in attendance, the conversation shifted to what constitutes a verifiable identity online—one’s social media history and relationships, public profiles with consistent device id(s) and long-held accounts—and how fast and cheaply they might be checked or included as signals in machine learning models.
The Equifax case made clear why cooperation became necessary for each of the participants involved in the discussion. While individual and user-facing solutions, like two-factor authentication, one-time passwords, and personal history questions—potentially even asked by a real person on a chat or over the phone—were useful, they all compromised heavily on the UX and threatened to increase costs, breaking two dogmatic considerations. These solutions all increased ‘friction’ and were too heavy-handed, slow, and antiquated to work at scale. Users would be annoyed, with their migration elsewhere threatening the bottom line. ‘We just assumed everyone had the data’, said one person about the afterlife of the Equifax hack.
New data—digital data—has become necessary for companies to identify users and mitigate threats. Users and their identities can be verified through their digital histories, social connections, data use patterns, login methods, password changes, location changes, device language, purchasing history and keystrokes, among thousands of signals that relate different kinds of commonalities—and discontinuities. That digital identities can be verified, across platforms, is of ubiquitous value and worth sharing across firms to construct a systemic condition of trust. Later, in a break-out group, we discuss the problem of verification, and how to know or predict if a user is malicious, given the myriad of means to avoid identity verification. An employee of one of the top five largest companies in the world raises his hand, to mention that his company has decided to make available limited access to its database of billions of user identities and their histories, which other companies can build into their collection of verification signals. This not only helps solve the UX problem. It serves as a kind of bridge between spaces, where a user’s history and identity being recognized by one firm serves as the basis for entry into another—and in so doing, turning the market–fortress into a selective network of connected market–fortresses.
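As a rough indication of how such signals might be combined, the sketch below scores an account from a handful of the signals mentioned above (account age, device consistency, a cross-platform profile match) and routes low-scoring users into higher-friction verification. It is a toy model under assumed signal names, weights and threshold, not a description of any firm's actual scoring.

```python
# Illustrative sketch only: a weighted-signal trust score of the general kind
# described above. Signal names, weights and the threshold are hypothetical.
ACCOUNT_SIGNALS = {
    "account_age_days": 1400,        # long-held account
    "consistent_device_id": True,    # same device id(s) across sessions
    "login_location_change": False,  # no sudden jump in location
    "linked_social_profile": True,   # public profile recognized by another firm
    "recent_password_change": False,
}

WEIGHTS = {
    "account_age_days": 0.3,
    "consistent_device_id": 0.25,
    "login_location_change": -0.25,
    "linked_social_profile": 0.3,
    "recent_password_change": -0.1,
}

def trust_score(signals: dict) -> float:
    score = 0.0
    for name, weight in WEIGHTS.items():
        value = signals[name]
        if name == "account_age_days":
            value = min(value / 1000, 1.0)   # normalize age to [0, 1]
        score += weight * float(value)
    return score

# Low-friction path for 'trustworthy' users; step-up checks only for the rest.
score = trust_score(ACCOUNT_SIGNALS)
print(f"trust score: {score:.2f}")
print("allow one-click login" if score > 0.5 else "require step-up verification")
```

The design choice the sketch tries to capture is the one voiced in the Risk Room: verification should be invisible for users judged trustworthy, with friction reserved for the residual cases.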
6. The bag of behaviours: considerations and conclusions
‘Take a bag of behaviors’, says a participant in the Risk Room, ‘then look for anomalies’. ‘And use it to predict the future to stop them from doing something bad—or for marketing’, says another. To disaggregate good from bad, T&S has also become about the pursuit of the false positive—the outlier that should not be—at scale. It is a search for patterns in data that do not fit, in order to build complex and iterative automated models to identify and predict what and who does and does not. Taking data and users from across the world, inferring their intent and making decisions, is the means for automating boundaries; teaching machines to infer an inside and an outside, to define it and act in its defence. And, in so doing, too, to reassert dominant patterns of normal behaviour, and the value they represent. In dealing with real-time data, inferring and separating good and bad is an iterative process, at scale, and at the temporality of a nano-second. All of it is done by engendering both supply and demand, to ensure that people keep coming back into digital life, especially to continue interacting and exchanging.
Worrying, though, is the turn to prediction, the proactive, and protection. Companies are turning to additional solutions: taking known instances of fraud and inappropriate images, and retracing the data footsteps that surround them to detail recognizable patterns. This data, now a category of its own, is pedagogical. Taking this ‘bag of (bad) behaviors’ is now a starting point for identifying and asserting boundaries between, as they say, the ‘real fraudsters and the trolls’, in the full recognition that an error like a false positive is good since it is the basis for continual refinement. ‘Train your model on egregious/obvious behavior’, as one Risk Room person described it, ‘then use the model to learn more signals or catch more that you didn’t notice.’ Threats, data that do not fit and ‘problematic users’ are now the logic to teach machines to build the walls of the fortress, in a balancing act between getting it wrong, being victimized and predicting the future. ‘Let some bad actors in to learn from them’, said another while discussing coupon and payment fraud. ‘Don’t take down an account, even if you know it is fraud, until it is trying to cost you money’, another echoes.
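A minimal sketch of this logic, using invented behavioural features and an off-the-shelf anomaly detector, might look like the following. It is intended only to make the ‘bag of behaviours’ idea concrete; the features, the contamination rate and the choice of model are assumptions, not a reconstruction of any participant's system.

```python
# Illustrative sketch only: 'take a bag of behaviours, then look for anomalies',
# here with an isolation forest over made-up behavioural features
# (session length, purchases per session, chargeback rate, account age in days).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# A 'bag of behaviours': mostly routine users...
normal = rng.normal(loc=[30, 2, 0.01, 400], scale=[10, 1, 0.01, 150], size=(1000, 4))
# ...plus a few egregious/obvious outliers to learn from (heavy chargebacks, new accounts).
fraudulent = rng.normal(loc=[5, 15, 0.6, 3], scale=[2, 5, 0.1, 2], size=(10, 4))
behaviours = np.vstack([normal, fraudulent])

# Fit on the whole bag; the model learns what does not fit the dominant pattern.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(behaviours)

# Score new sessions in (near) real time: -1 flags an anomaly, 1 lets it pass.
new_sessions = np.array([
    [28.0, 2.0, 0.0, 350.0],   # looks routine
    [4.0, 20.0, 0.7, 1.0],     # looks like the 'bad actor'
])
print(model.predict(new_sessions))   # e.g. [ 1 -1 ]
```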
What one can describe and conceptualize as the digital market–fortress premised on these practices is actually a node in an online sphere of connected digital market–fortresses that is being written and codified by the work of T&S. The project is to ensure not only trust in individual firms, and other atomized individuals, but to ensure trust in the bases of platform capitalism as a distinctive mode of exceptionally impersonal exchange. It is a project that unites platforms as dissimilar as eBay, Tinder, Coinbase, Pornhub, Airbnb, Yelp, Amazon, Square, LinkedIn, Gusto, Google, Facebook, WePay, Twitter, Postmates, Pinterest, Netflix, Upwork, Lyft, Patreon, Adobe and so many others. This common project to make people trust putting their money at risk online forges a new spatial economy while deliberately bounding out the state and its potential intervention.
The prospect of a data breach, massive user flight, a reputation-obliterating event, a lawsuit, or regulation that can implode a firm’s reputation, its shareholder value, or its user base is powerful. The pace of capitalism is unprecedented, and no one wants to become Ashley Madison or Equifax. And yet these cases are themselves demonstrative of what they generate normatively in response: an ever more proactive, dedicated and systemic concern for maintaining and securing the feasibility of the platform economy, and not just individual firms. T&S seeks to fasten together an expanding mode of capitalist production, itself built on shaky foundations of user fidelity and trust that must be ever more discreetly constructed and safeguarded, in an effort to make platform capitalism inevitable. The irony is that the sharper the binary between trustworthy and untrustworthy is made, the tighter the concentration, the greater the implication for its rupture, and the more significant that rupture can be.
It is a circular problem that implies a race to the bottom, with an obvious contradiction. The more the ‘bad actor’ and ‘untrustworthiness’ are pursued, the greater their spectre, and the spectre of failure, becomes. Because it is economic power deciding these terms, and not the market playing within the rules of the game defined by the state, firms cannot appeal to a ‘neutral’ arbiter when things go wrong. The logic of law does not apply in the same way for a system created, known and fostered for discrete private benefit. Given the stakes, this kind of protection does not leave much room for negotiation.
Meanwhile, many are concerned with a wider crisis of trust in politics and society. It is not happenstance. A crisis of trust is commonly accompanied by the rise of other iterations of reliability, modes of belonging and ways of trusting. ‘In a society where trust is in short supply and democracy is weak’, Gambetta (2011) once wrote, ‘the mafia sells protection, a guarantee of safe conduct for parties to commercial transactions.’ Within a larger historical moment in capitalist change and an associated wilting of faith in government, I have described an online economy where trust was in short supply, where democracy is both formally and socially absent, and where a guarantee of safe conduct for parties to commercial transactions was subsequently built by global corporations. Perhaps this was also made savoury by virtue of declining levels of trust in government and representative democracy. The existence of exchange online is widely understood to be apolitical, a sphere of interaction that counts very little on the presence of the state, and it might even be seen as more trustworthy because of this. Be not fooled. This sphere is governed and protected, if quietly, and this is a vital reason why it has been possible to advance platform capitalism at dizzying speed across cultures, languages and varied global–ethical conditions. This dramatic expansion is not an obvious outcome, nor is it something that can be explained by arguments of trust as primarily behavioural, cognitive or inter-personal.
This does not mean that users must submit to a regime of trust that is both forging ahead with vertiginous inequality and coding racial differentiation into it. As Farrell (2004) puts it: ‘Extreme disparities of power mean that both the stronger and the weaker actor will have good reasons to distrust each other’ (p. 92). If power is not absolute but limited, those subjected to power have the ability to exert control, influence the techniques of ordering, or alternatively to opt out of the relationship. If they know that they are being subsumed into a regime of trust and hierarchy, that is.
Given its many historical peculiarities, the dominance of platform capitalism in today’s global economy demands a political economy explanation. Attending to how these firms have managed to grow so large in the absence of a clearly defined system of protection that matches their trans-jurisdictional influence and scope, and to their new concentrations of wealth, shows that this system of trust-making and protection is historically distinctive and structurally constitutive.
When Hindman (2018) spoke of the centripetal forces that concentrate platform company power on the Internet, he also called for ‘simplified stories that can explain both the enormous concentration at the top of the web and the (very) long tail of smaller sites’ (p. 8). I have sought to do something like that here. But these are not so much ‘stories’ about how T&S enables a march towards concentration as windows on embedded practices in a curious mode and moment of capitalism, including its banal assumptions and practices of what is good and routine for order-making and enforcement of inequality. In a moment of rapid change, globally, which is being driven by a deliberate and increasingly automated tinkering with capitalist social relations, understanding the work of T&S is vital to a full conceptualization of the changing nature of the political economy of capitalism, how it deepens inequality, and why it requires fortification on new terms.
Acknowledgements
The author thanks Jason Jackson, Salome Viljoen, John Van Maanen, Diane Davis, David Skarbek, Kevin O’Neill, Lauren Wilcox, Thomas Blom Hansen, Jason Sharman and Jude Browne for reading and commenting on the text and/or for arranging research talks. Thanks are extended to the Technology and New Media Cluster in Sociology and the Infrastructural Geographies group at Cambridge, the Centre for Transnational and Diaspora Studies at the University of Toronto, the Department of Urban Studies and Planning at MIT, and the Department of Anthropology’s Urban Beyond Measure initiative at Stanford for hosting discussions.
Footnotes
1. A pseudonym.
2. The Risk Room ran from 2016 to 2020, folding in the midst of the COVID-19 pandemic.
3. This research was carried out between 2015 and 2018, centring on multiple visits to the Bay Area, including a stint as a Visiting Scholar in a university there. During this time, I advanced the study by carrying out interviews and repeat interviews, in person and by Skype where necessary, with individuals first contacted by e-mail or via mutual acquaintances. In this highly insular community, my sample and access were facilitated by early interviewees who provided introductions to current and former colleagues. I spoke with current and former employees of Google, Twitter, Facebook, Apple, LinkedIn, Uber, eBay and Airbnb, among others. This eventually allowed for my participation in the Risk Room, which consisted of attendance in person where possible plus analysis of meeting minutes circulated to all invitees. My work with the content moderation firm ‘Tinkle’ has been ongoing since 2017. Unless otherwise noted, names are pseudonyms. I have anonymized participants, including by some combination of changing names, providing abstracted affiliation, leaving out some mundane identifiers and refraining from providing dates and locations of interviews. Methodologically, this work seeks not to locate trust and safety as a question for discrete companies, but, rather, to locate common patterns and practices across a rapidly developing and highly fluid corporate environment that includes constant movement by employees across firms, and in a highly secretive and guarded social world.
4. It remained on YouTube for at least 2 years.
5. These are: ‘Sex and Nudity’, ‘Illegal Drug Use’, ‘Theft Vandalism and Fraud’, ‘Hate Content’, ‘Graphic Content’, ‘IP Blocks and International Compliance’, ‘Self Harm’, ‘Bullying and Harassment’ and ‘Credible Threats’.