The Structuring Work of Algorithms
Algorithms reflect how power is arranged within our society while also producing power dynamics themselves. Algorithmic systems configure power by engaging in network-making, thereby shaping society and entrenching existing logics into infrastructure. To understand the moral economy of high-tech modernism, we must explore how algorithmic systems contribute to ongoing social, political, and economic structuring. This essay reflects on why the positions that algorithmic systems occupy within those arrangements matter.
Henry Farrell and Marion Fourcade call on us to consider “the moral economy of high-tech modernism.”1 They emphasize how algorithmic systems are engines of categorization that produce power dynamics by structuring the social, political, and economic order. Yet algorithmic systems are also configured by networks and data infrastructures. Algorithms not only produce power but reflect the power arrangements within which they operate.
Algorithms, like bureaucracy, structure things, making the categories through which they exercise power over society. Their category-making is entangled with the categories that the state creates for its own power-making purposes.2 Algorithms turn disorganized data into seemingly coherent networks, but the product of this process often reifies and amplifies existing power arrangements. No matter what categories are created, they are at most frozen slices of a larger and perpetually shifting whole, fleeting and provisional summaries of one part of a very complex set of network relations. Thus, it is in the network itself that the real power relations remain.
In laying out a theory of “power in networks,” sociologist Manuel Castells leverages the language of “programming” to analyze the power that social actors assert when purposefully structuring the institutional, social, economic, and political arrangements of society. Castells is particularly interested in what he calls “network-making power,” or “the power to program specific networks according to the interests and values of the programmers, and the power to switch different networks following the strategic alliances between the dominant actors of various networks.”3 Inverting this programming language to examine machine learning systems that exert power in society reveals why algorithms feel so unsettling. The power that stems from these systems is rooted in their network-making work as much as in the authority and centrality they are given.
Many twentieth-century equality movements centered on revealing the disparate experiences and centuries-long discrimination of people based on socially constructed categories. In the United States, the Civil Rights Act of 1964 sought justice by reifying categories – race, color, religion, sex, national origin – in order to reclaim them. Doing so made these categories infrastructural, requiring the state to collect data about people in relation to these categories to support antidiscrimination claims.
Machine learning algorithms do not require a priori categories because they can be designed to cluster nodes based on available features or to operate across multidimensional networks without clustering. Even though sociologists have long highlighted how network position matters, antidiscrimination laws have no conception of networks.4 After all, antidiscrimination laws start with the notion of a “class” of people. Scholars who evaluate the discriminatory work of algorithmic systems invariably begin by examining how algorithms develop clusters and correlations that can be mapped onto known categories, either directly or as proxies.5 And, indeed, when algorithms are flagged as racist in social discourse, critics revert to the social categories maintained by the state that have dominated sociopolitical consciousness.
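To make the contrast concrete, consider a minimal sketch of feature-based clustering. The data, feature names, and library choice below are hypothetical illustrations of my own, not drawn from the essay; the point is only that the algorithm invents its own groupings from whatever features it is given, with no legally cognizable class anywhere in sight.

```python
# Minimal sketch: clustering without a priori categories.
# Assumes scikit-learn and NumPy; the features (weekly hours, zip-code
# median income, tenure) are hypothetical illustrations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Each row is a person described only by numeric features;
# no race, sex, or other legal category appears anywhere.
features = np.column_stack([
    rng.normal(35, 10, 200),        # weekly hours worked
    rng.normal(55000, 15000, 200),  # zip-code median income
    rng.normal(4, 2, 200),          # years of tenure
])

# The algorithm produces its own groupings from the data it sees.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

# The resulting labels (0, 1, 2) are not legally cognizable classes,
# yet they can still track protected categories if the features
# correlate with them (for example, income by zip code).
print(np.bincount(clusters))
```

Whether clusters like these end up tracking race or sex depends entirely on what the features encode, which is why proxy analysis is where discrimination audits typically begin.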
The real danger of algorithmic systems – and their network-making power – is that they can be used to produce a new form of discrimination, one that cannot easily be mapped onto categories but can help enable and magnify social inequity all the same. Contending with the moral consequences of these systems will require a new framework for evaluating and remedying inequity, for these systems can easily be designed to evade the categories that ground legal frameworks.
Algorithms run on data, but data are made, not found. Data are never neutral or objective; they are socially produced.6 What data exist – and what do not – stem from social choices. What data algorithms see – and what they do not – are also shaped by social choices. In other words, algorithms are not the only socially constructed system in this sociotechnical arrangement, nor the sole source of power. Algorithms can be made transparent and interpretable but still be manipulated to produce dangerous outcomes when actors toy with the underlying data.
In the early part of the twentieth century, Congress wanted to depoliticize the allocation of representatives upon which the United States’ political system depends. Every decade, after census data were delivered to Congress, fights would break out in the halls of the Capitol as politicians argued over how to divvy up the representatives, and how many more representatives to add. The proposed solution, eventually adopted in 1929, was to predetermine both the number of representatives and the algorithm used for apportionment.7 This algorithmic “solution” did not render the census neutral; it simply shifted the locus of politicization to the data infrastructure.8
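For readers who want to see what predetermining the algorithm looks like in practice, here is a minimal sketch of the method of equal proportions (the Huntington-Hill method), the apportionment formula the United States has used since 1941. The essay does not name a specific method, and the population figures below are made up; the point is that once the seat count and the formula are fixed, the only remaining lever is the population data itself.

```python
# Minimal sketch of the "method of equal proportions" (Huntington-Hill),
# the apportionment algorithm used for the U.S. House since 1941.
# Hypothetical populations; real apportionment uses census counts.
import heapq
from math import sqrt

def apportion(populations: dict[str, int], seats: int = 435) -> dict[str, int]:
    # Every state starts with one seat, as the Constitution requires.
    allocation = {state: 1 for state in populations}
    # Remaining seats go, one at a time, to the state with the highest
    # priority value: population / sqrt(n * (n + 1)), where n is the
    # number of seats the state already holds.
    heap = [(-pop / sqrt(1 * 2), state) for state, pop in populations.items()]
    heapq.heapify(heap)
    for _ in range(seats - len(populations)):
        _, state = heapq.heappop(heap)
        allocation[state] += 1
        n = allocation[state]
        heapq.heappush(heap, (-populations[state] / sqrt(n * (n + 1)), state))
    return allocation

# Toy example with three hypothetical states and ten seats.
print(apportion({"A": 6_000_000, "B": 3_000_000, "C": 1_000_000}, seats=10))
```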
Contemporary debates around the power of algorithms often highlight how biases in underlying data can affect the model.9 Yet the response to these critiques is often a call to “de-bias” the data, as though an idealized “neutral” data set were possible. Once an algorithmic system is situated as a powerful social actor, those seeking to configure the system to their advantage shift their attention to shaping the data infrastructure upon which those systems depend. Algorithms do not make a system more neutral; they simply reconfigure the site of manipulation.
Farrell and Fourcade rightfully highlight the limits to the pursuit of fairness and justice through algorithmic systems, flagging examples of naive thinking on the part of algorithmic dreamers. Technology, like bureaucracy before it, cannot fix intractable social ills. Rather, technology is consistently leveraged to codify the values that its makers or users wish to make rigid. The algorithmic systems that plague us today were born out of a variant of late-stage capitalism that is driven by financialization. With the backing of venture capital, most of these systems grow like cancer, killing some cells while replicating malignant ones in the interest of those making economic bets. Whatever morality exists in this version of capitalism is centered on individual gain at the expense of the collective. That logic is embedded in countless algorithmic systems.
Algorithms are not inherently evil, but their position within a political, economic, or social arrangement matters.10 Consider “scheduling software,” a category of tools designed to allocate job shifts to workers. Such tools can be designed to optimize many different interests. If workers were asked to indicate their preferences – what start times are ideal, how many hours they wish to work, whom they wish to work with, and so on – the system could be structured to optimize workers’ interests.11 But these systems are rarely designed this way. Employers who buy these systems seek scheduling tools designed to maximize their own interests. The resultant tools often ensure that few workers are given enough hours to be eligible for benefits. Shifts are commonly allocated to prevent unionization by minimizing opportunities for workers to develop relationships. These algorithmically generated shifts are rarely consistent or announced in advance, leaving workers little room to navigate childcare, let alone the flexibility to hold a second job to make up for the incomplete hours. This is not by accident. The design choices in these algorithms maximize the financial interests of management at the expense of workers, actively structuring networks to disempower workers without legally discriminating against them.12 The problem in this arrangement is not the algorithm, but the financial and political context that makes this design acceptable.
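The design choice is easy to make concrete. The toy sketch below (with hypothetical workers, hours, and thresholds of my own invention, not drawn from any real scheduling product) runs the same allocation routine under two objectives: one caps every worker just below a benefits-eligibility threshold, the other honors workers' requested hours.

```python
# Minimal sketch: the same scheduling routine optimizing different interests.
# All names, thresholds, and preferences are hypothetical; the point is that
# whose interests the objective encodes is a design choice, not a property
# of the algorithm itself.
from dataclasses import dataclass

BENEFITS_THRESHOLD = 30  # weekly hours at which benefits eligibility begins

@dataclass
class Worker:
    name: str
    requested_hours: int  # hours the worker would like to work

def assign_hours(workers, total_hours, optimize_for="employer"):
    """Split a pool of shift hours across workers under one of two objectives."""
    schedule = {}
    remaining = total_hours
    for w in workers:
        if optimize_for == "employer":
            # Cap everyone just below the benefits threshold.
            hours = min(w.requested_hours, BENEFITS_THRESHOLD - 1, remaining)
        else:
            # Honor the worker's requested hours where possible.
            hours = min(w.requested_hours, remaining)
        schedule[w.name] = hours
        remaining -= hours
    return schedule

staff = [Worker("Ada", 40), Worker("Ben", 35), Worker("Cam", 20)]
print(assign_hours(staff, total_hours=90, optimize_for="employer"))
print(assign_hours(staff, total_hours=90, optimize_for="workers"))
```

Nothing in the routine itself dictates which objective gets encoded; that decision belongs to whoever commissions the system.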
High-tech modernism is not a radical break from high modernism, but an extension of the very same logics. It is reasonable to focus on the new instruments of power, but it behooves us also to keep track of the social, political, economic, and structural arrangements that enable algorithms of interest to emerge, as well as the networks that such technologies rely on and reinforce.
Endnotes
- 1 Henry Farrell and Marion Fourcade, “The Moral Economy of High-Tech Modernism,” Dædalus 152 (1) (Winter 2023): 225.
- 2 Geoffrey C. Bowker and Susan Leigh Star, Sorting Things Out: Classification and Its Consequences (Cambridge, Mass.: The MIT Press, 2008), 1; and James C. Scott, Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (New Haven, Conn., and London: Yale University Press, 1999), 8.
- 3 Manuel Castells, “A Network Theory of Power,” International Journal of Communication 5 (2011): 773.
- 4 Ronald S. Burt, “Positions in Networks,” Social Forces 55 (1) (1976): 93; and Mark S. Granovetter, “The Strength of Weak Ties,” American Journal of Sociology 78 (6) (1973): 1360–1380.
- 5 Michael Feldman, Sorelle A. Friedler, John Moeller, et al., “Certifying and Removing Disparate Impact,” in Proceedings of the 21st ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (New York: Association for Computing Machinery, 2015), 259–268; and Wendy Hui Kyong Chun and Alex Barnett, Discriminating Data: Correlation, Neighborhoods, and the New Politics of Recognition (Cambridge, Mass.: The MIT Press, 2021).
- 6 Catherine D’Ignazio and Lauren F. Klein, Data Feminism (Cambridge, Mass.: The MIT Press, 2020); and Theodore M. Porter, Trust in Numbers (Princeton, N.J.: Princeton University Press, 2020).
- 7 Dan Bouk, House Arrest: How an Automated Algorithm Constrained Congress for a Century (New York: Data & Society, 2021).
- 8 Dan Bouk and danah boyd, “Democracy’s Data Infrastructure: The Technopolitics of the U.S. Census” (New York: Knight First Amendment Institute at Columbia University, 2021).
- 9 Joy Buolamwini and Timnit Gebru, “Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification,” in Proceedings of the 1st Conference on Fairness, Accountability and Transparency, ed. Sorelle A. Friedler and Christo Wilson (New York: Association for Computing Machinery, 2018), 77–91; Kadija Ferryman, Fairness in Precision Medicine (New York: Data & Society, 2018); and Tolga Bolukbasi, Kai-Wei Chang, James Zou, et al., “Man Is to Computer Programmer as Woman Is to Homemaker? Debiasing Word Embeddings,” in Advances in Neural Information Processing Systems 29 (2016).
- 10 Andrew D. Selbst, danah boyd, Sorelle A. Friedler, et al., “Fairness and Abstraction in Sociotechnical Systems,” in Proceedings of the Conference on Fairness, Accountability, and Transparency (New York: Association for Computing Machinery, 2019), 59–68.
- 11 Ethan Bernstein, Saravanan Kesavan, and Bradley Staats, “How to Manage Scheduling Software Fairly,” Harvard Business Review, December 2014; and Min Kyung Lee, Ishan Nigam, Angie Zhang, et al., “Participatory Algorithmic Management: Elicitation Methods for Worker Well-Being Models,” in Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society (New York: Association for Computing Machinery, 2021), 715–726.
- 12 Alexandra Mateescu and Aiha Nguyen, “Algorithmic Management in the Workplace,” Data & Society Explainer (New York: Data & Society, 2019).