Luke Munn – Algorithmic Power, Algorithmic Failure

The algorithm has increasingly suffused laboring bodies, domestic interiors, and urban fabrics. For a platform like Uber, this entails new forms of algorithmic governance that usher drivers to particular locations in the city at particular times of day and draw out a specific type of performance understood as ‘best practice.’ For the ‘always listening’ digital assistant that is Amazon Alexa, this means filling the traditionally private space of the kitchen or living room with an invisible new zone of capture. And within a system like Airbnb, the algorithmic indexing of listings exerts unseen pressures on architectures—rearranging apartments, transforming homes into hotels, and subtly reconstituting the wider geographies of the city itself. Alongside these consumer-facing examples are less visible but equally significant intrusions made at the enterprise or governmental level. These come without focus-grouped product names, but they determine teacher rankings, credit scores, loan approvals, parole sentences, and no-fly lists. More and more, the algorithmic permeates the processes and people around us, impinging upon society and culture in highly significant ways.

But this we already know. What is less clear is how this shaping is accomplished. How does the algorithmic invest bodies, enlist subjects, move matter, and coordinate relationships? In short, how does an algorithmic procedure attain and exert power?

The algorithmic is typically conflated with code, with a special form of language written by a programmer. If, the argument goes, the user or researcher could only read back this text, then all would be revealed. Mark Marino, to take just one example, states that “we can read and explicate code the way we might explicate a work of literature” (2006). But this text is typically proprietary, available only to employees or selected developers. The moment of enlightenment never arrives, and the algorithmic becomes the oft-cited black box, an opaque object that cannot be examined or intervened in. A new approach is needed, and indeed the development of these systems has moved away from the monolithic application and adopted a micro-service approach. These are small, targeted services that do one thing and do it well—converting currency, logging miles, tracking ads. Hundreds of these services sit within a wider ecosystem, responding asynchronously to requests as they arrive.
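To make the micro-service idea concrete, here is a minimal sketch of one such service (the service name, rate table, and message format are hypothetical, invented for illustration rather than taken from any actual platform):

```python
import asyncio

# Hypothetical, hard-coded rate table; a real service would query live data.
RATES = {("USD", "EUR"): 0.92, ("EUR", "USD"): 1.09}

async def convert_currency(request: dict) -> dict:
    """A micro-service in miniature: it converts currency and does nothing else."""
    rate = RATES[(request["from"], request["to"])]
    return {"amount": round(request["amount"] * rate, 2), "currency": request["to"]}

async def main() -> None:
    # Requests arrive asynchronously and are handled independently, as if this
    # were one small service among hundreds in a wider ecosystem.
    requests = [
        {"from": "USD", "to": "EUR", "amount": 100.0},
        {"from": "EUR", "to": "USD", "amount": 50.0},
    ]
    for response in await asyncio.gather(*(convert_currency(r) for r in requests)):
        print(response)

asyncio.run(main())
```

No single service here ‘is’ the algorithm; whatever force the system exerts emerges from many such components interacting.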

So algorithmic objects can be understood as ecologies, a term which better encompasses this heterogeneous mix of cables and wire, bodies and vehicles, capital and code. Ecologies provide a way of “understanding the various scales and layers through which media are articulated together with politics, capitalism and nature, in which processes of media and technology cannot be detached from subjectivation” (Parikka and Goddard 2011, 1). These elements come together in various ways to carry out activity in the world. But if the world is promising, it is also unpredictable. Here, data becomes messy, subjects turn contentious, space can be antagonistic. Execution, Wendy Chun insists (2008, 304), is not simply a “perfunctory affair.” Nothing is guaranteed. Instead, any power must be incessantly negotiated. The forces exerted by the ‘merely’ technical operations of the algorithmic—storing, searching, indexing, presenting—must accumulate into meta-operations: encapsulating life, enlisting subjects, remaking space, and enchanting users. In focusing on these performances, we move away from secret codes and software to a set of observable and embodied operations that can be analysed.

These operations are political in that they determine the possible. Rather than parties and politicians, this is the politics of protocols (Galloway 2004) and parameters (Magee and Rossiter 2015, 76). Or, to update Michel Foucault (1978, 94), one might say that relations of power now become immanent within the algorithmic. Take Alexa, for example. Amazon chose to construct this intelligent agent as a female assistant. In doing so, it tapped into a deep historical vein of gendered service embodied in the female voice, from present-day airport announcements back to Otis elevators and all the way back to Bell’s telephone operators. As Sadie Plant reminds us (1997, 126), the operator routing connections at the switchboard exemplified the role of the woman “poised as an interface between man and world.” The new breed of digital assistants for the smart home—Microsoft Cortana, Apple Siri, and Google’s Project Majel—simply continues this lineage. The gendered telephone exchange thus establishes the precedent for the gendered Internet of Things exchange. Alexa-as-interface builds directly atop the older concept of woman-as-interface. In doing so, the gender politics of Silicon Valley are infused into the qualities and responses of ‘Alexa’ and in turn determine our range of possible activities and agencies.

Indeed, we can continue to follow this ecology, investigating the microprocessors that power the ‘cloud-based’ Alexa. These Intel chips are themselves based on years of research and development originating in Silicon Valley. As Pellow and Park demonstrate (2003, 114, 120), a particular workforce built these chips, one composed of Latino and Asian-American women, regarded by management as more docile labor; working with highly toxic chemicals day after day, they developed nausea, migraines, skin burns, cancers, and long-term reproductive debilitations. Across times and spaces, a logic of extraction from the servile female remains constant. Thus Alexa’s ‘effortless’ experience is built atop historical and cultural operations that supplement its technicity in crucial ways. Infused into technicity, these parameters inform how we see ourselves and others.

And yet this power is never totalizing. A certain “grammar of operations” (Fuller 2005, 167) must be performed in order to map subjects and spaces, draw them into a functional sequence, and exhaust their productive potential. But in moving from whiteboard to world, the algorithmic must deal with a diverse set of materials—materials which can be antagonistic to its objectives. Operations are never guaranteed, but must be negotiated. What occurs when these operations are unsuccessful? To explore this, we can sketch out a failure of algorithmic power taking place within a particular ecology—Uber.

Uber as Algorithmic Failure

Uber’s worker starts life as a data-object. The object specifies the properties that represent the platform’s so-called Driver-Partner: name, city, rating, current status and so on. The rich life of the subject is thus mapped onto an internal informational schema, a process I call encapsulation. Within Uber’s inner world, every Driver-Partner is abstracted into a collection of variables or parameters. This abstraction is productive, constructing a generic object that can be tracked, messaged, rated and compared against other drivers. But to abstract is also to ignore—including these properties means excluding other possible understandings of identity: race, culture, class. The result is a generic driver, interchangeable with any other. Important complexities and contingencies are not encapsulated, leaking out of this strict envelope of subjectivity. So Uber’s understanding of the worker is universal, fungible—a driver is a driver. And this thin understanding recoils on the ride-share company in various ways.
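A minimal sketch suggests what such an encapsulation might look like (the field names follow the properties listed above; Uber’s actual schema is proprietary, so this is purely illustrative):

```python
from dataclasses import dataclass

@dataclass
class DriverPartner:
    """Illustrative only: the subject mapped onto an internal schema.
    Whatever has no field here (race, culture, class, motivation)
    leaks out of the envelope."""
    name: str
    city: str
    rating: float        # comparable against every other driver
    current_status: str  # e.g. "online", "on_trip", "offline"

# Two different people, encapsulated. Measured only by the parameters the
# platform acknowledges, they become interchangeable: a driver is a driver.
a = DriverPartner("Driver A", "Auckland", 4.8, "online")
b = DriverPartner("Driver B", "Auckland", 4.8, "online")
print((a.city, a.rating, a.current_status) == (b.city, b.rating, b.current_status))  # True
```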

Every time a Rider requests a ride, a driver needs to be there. And they need not only to show up, but to perform a professional and timely service. In other words, Uber requires an enlistment of the worker towards a specific objective. This operation is made all the more difficult by the conditions of platform labor—conditions which insist on the autonomy of the remote worker, ruling out many of the standard employer/employee tactics. A cluster of operations—timed messaging, gamified missions, citywide campaigns, surge notifications—attempts to direct drivers to a particular understanding of ‘best practice’ labor, conducted in particular places at particular times.

But enlistment can only operate on the understanding of the Driver that Uber has encapsulated—a universal everyman, a generic caricature. This explains why, for instance, Uber’s attempts to funnel workers into shift work have been largely ineffective, and why many drivers ignore mechanisms like Surge pricing altogether (Lee et al. 2015, 5). These ‘targeted communications’ largely miss their target, falling instead on an abstracted, algorithmically constructed subject that fails to incorporate the complex and varied motivations unique to each worker. As Rosenblat and Hwang argue (2016, 4), the universal platform mistakenly sees the labor pool as monolithic, a “relatively equivalent mass.” A clear gap begins to emerge between the worker and Uber’s understanding of the worker. One byproduct is turnover—more than half of all Uber drivers quit within the first year.

For those who continue to drive, a particular subjectivity is desired. It is not enough to simply show up and perform a task. The task must include an affective element. For Uber this often entails a smile, a conversation, an offer of bottled water or mints. Technicity cannot code for this affective labor, what Hochschild (2003) terms ‘emotional labor’, because it must appear to be spontaneous, improvised, from the heart. The task must also be made algorithmically legible. Here, again, technicity reaches its limits—only so much information can be gleaned from the smartphone and its sensors. Instead, Uber seeks to enchant the worker, drawing out a particular subjectivity that accommodates itself to the algorithmic. This entails the worker “turning to face the algorithm” (Gillespie 2014) and attempting to mirror the desired response. In playing to strengths and overlooking weaknesses, the subject supplements ‘pure’ technicity in important ways, tuning activities for maximum recognition: views, stars, results.

But Uber’s ineffective encapsulation and enlistment instead often disenchant the worker. Disillusioned, drivers work to obfuscate rather than make legible, discovering ‘hacks’ and sharing them on forums. For example, one technique to reset bans is to log off immediately and then log on again. Here the gap between subject and Uber’s understanding takes the form of a temporal distinction—a difference between the smooth, cohesive time of the subject and the syncopated temporality of the platform. Far from being glitches or errors, these techniques rely on the very consistency of computation—logically working with its internal (and inevitably partial) understandings. Rather than ‘breaking’ the system, they are better understood as immanent to it, widening a fundamental gap that already exists: the gap between subjects and their algorithmically understood counterpart.

Today, power is conducted through the prism of the algorithmic. This power is never given or assumed, but must be incessantly performed through a set of operations. These technical operations—instantiating objects and indexing data—must coalesce into meta-operations—creating subjectivities, forming relations, and directing work. In carrying out encapsulation, enlistment and enchantment, algorithmic platforms exert significant force on subjects. Yet the opposite also applies—when this grammar of operations is partial or unsuccessful, traction is not attained and a gap between subject and referent emerges. As each new technique is added, the gap between subject and referent only increases. In this sense, the algorithmic is often constructed, not unlike finance, as “long chains of increasingly speculative instruments that all rest on the alleged stability of that first step” (Sassen 2014, 118). Instrumentalizing this discrepancy suggests more intentional and effective interventions in the algorithmic regimes that increasingly shape our everyday.

Luke Munn uses the body and code, objects and performances to activate relationships and responses. His projects have featured in the Kunsten Museum of Modern Art, the Centre de Cultura Contemporània de Barcelona, Fold Gallery London, Causey Contemporary Brooklyn, and the Istanbul Contemporary Art Museum, with commissions from Aotearoa Digital Arts and TERMINAL. He is a Studio Supervisor at Whitecliffe College of Art & Design and a PhD candidate at the Institute for Culture and Society, Western Sydney University.

Works Cited:

Chun, Wendy Hui Kyong. 2008. “On ‘Sourcery,’ or Code as Fetish.” Configurations 16 (3):299–324.

Foucault, Michel. 1978. The History of Sexuality. Translated by Robert Hurley. New York: Pantheon Books.

Fuller, Matthew. 2005. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, MA: MIT Press.

Galloway, Alexander R. 2004. Protocol: How Control Exists after Decentralization. Cambridge, MA: MIT Press.

Gillespie, Tarleton. 2014. “The Relevance of Algorithms.” In Media Technologies: Essays on Communication, Materiality, and Society, edited by Tarleton Gillespie, Pablo J. Boczkowski, and Kirsten A. Foot, 167–94. Cambridge, MA: MIT Press.

Hochschild, Arlie Russell. 2003. The Managed Heart: Commercialization of Human Feeling. Berkeley: University of California Press.

Lee, Min Kyung, Daniel Kusbit, Evan Metsky, and Laura Dabbish. 2015. “Working With Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers.” In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems, 1603–1612. ACM. https://www.cs.cmu.edu/~mklee/materials/Publication/2015-CHI_algorithmic_management.pdf.

Magee, Liam, and Ned Rossiter. 2015. “Service Orientations: Data, Institutions, Labor.” In There Is No Software, There Are Just Services, edited by Irina Kaldrack and Martina Leeker, 73–89. Leuphana, Germany: meson press.

Marino, Mark. 2006. “Critical Code Studies.” Electronic Book Review, December 4, 2006. http://www.electronicbookreview.com/thread/electropoetics/codology.

Parikka, Jussi, and Michael Goddard. 2011. “Unnatural Ecologies.” The Fibreculture Journal, no. 17:1–5.

Pellow, David, and Lisa Sun-Hee Park. 2003. The Silicon Valley of Dreams: Environmental Injustice, Immigrant Workers, and the High-Tech Global Economy. New York: New York University Press.

Plant, Sadie. 1997. Zeroes + Ones: Digital Women + the New Technoculture. New York: Doubleday.

Rosenblat, Alex, and Tim Hwang. 2016. “Regional Diversity in Autonomy and Work: A Case Study from Uber and Lyft Drivers.” Data & Society Research Institute. https://datasociety.net/pubs/ia/Rosenblat-Hwang_Regional_Diversity-10-13.pdf.

Sassen, Saskia. 2014. Expulsions: Brutality and Complexity in the Global Economy. Cambridge, MA: Harvard University Press.


12 Comments

  1. Hi Luke,

    Great to get a chance to read this text. The main question is very important for everyone to know and examine — as in, what’s running the algorithmic, networked universe out there? In fact, I think it links strongly to some of my own questions in ‘Unlocking Proprietorial Systems’, and to emerging cultural and technical blockchain defaults. The larger version of my own text will now include algorithmic questions, drawing on some of your ideas, referenced ;-)

    When you discuss algorithmic objects as ecologies, “a term which better encompasses this heterogeneous mix of cables and wire, bodies and vehicles, capital and code,” the understanding you provide comes from Parikka and Goddard’s ‘Unnatural Ecologies’. I’m wondering whether there was also a tinge of perceiving it through a media archaeology approach – wondering what this would look like?

    This also — “Today, power is conducted through the prism of the algorithmic. This power is never given or assumed, but must be incessantly performed through a set of operations.” … is significant.

    I’m wondering how you see algorithms in parallel to marketing intentions beyond the technicity of things, especially when you allude to biopolitical ideas and networks combined. So, what political concepts and expressions do you feel define where you stand critically on this subject?

    Wishing you well.

    marc


  2. Hey Marc, Thanks for the reading and the helpful comments.

    So in terms of ecologies, the key texts would be Guattari of course (The Three Ecologies), but Erich Hörl’s recent volume General Ecology is a good update, though Hörl is more philosophical rather than pushing it far enough as method, imho.

    But as you mentioned, Parikka and media archaeology go together – so what would that look like? It’s actually been interesting to do this research by ‘following the ecology’, which forced me to look at microprocessor history in 1970s Silicon Valley, or at Bell switchboard operators in 1910s New England, or the emergence of risk in 18th-century marine insurance in Britain. So it definitely feels like the two could go together nicely. 🙂

    In terms of political concepts, as I mention in the text, politics becomes that of parameters. This doesn’t mean the State disappears, but rather that it is reconfigured. I’ve been working on (struggling through) a text defining what algorithmic sovereignty looks like, a power that can be taken up by corporations as well as nation-states in various ways. So if politics is a determination of the possible, then an awareness of algorithmic operations–what they want and how they achieve it–allows us to make more effective interventions in our world.


    1. Hi Luke,

      Thanks for such an informed reply,

      In response to the notion of the state being reconfigured, and defining what algorithmic sovereignty may be in the near future: I suppose it gets problematic when much of the high-end technology is invisible to most people, and they are not able to critique or make decisions on how the algorithms are affecting their own lives. If the data of these algorithms is owned by private companies, governments and states, things get really difficult.

      “There is rarely a technical solution to social and political problems. Technology might be helpful in many ways, but only if its interrelations with social and material circumstances are part of the picture.” Tobias Matzner – http://bit.ly/2D5veyq

      The above quote by Matzner is ironic, because if we think for a moment and see ourselves as part of a right-wing contingency, controlling bots, trolls, and marketing on social media via algorithmic data supplied by the large social media platforms’ user databases, then it’s about how rich people are, and how they can buy democracy for their own political gain. The truth is, if we are discussing opening up the black box, it would be second-hand information sold to the highest bidder.

      However, Matzner’s proposition that “if its interrelations with social and material circumstances are part of the picture” is useful, but only if the organisation (of which there are few) is dedicated to self-critique as a regular process of challenging dodgy algorithmic practices.

      Wishing you well.

      marc


      1. Yeah the Matzner quote is interesting, and something Evgeny Morozov, for instance, has extensively critiqued with his notion of technosolutionism. Wealth, inequality, capital: all matter deeply, of course. And indeed this is why I have problems with hand-wringing calls for ‘algorithmic governance’ from some scholars – in asking companies to make things transparent, to make things fairer, they’re essentially asking for a form of kind capitalism – and it’s never gonna happen.

        But equally I think we get in trouble if we start to contrast ‘humanity’ with technicity – my whole approach with ecologies is to say that they’re thoroughly entangled. When your Uber driver responds to a command, chats with you, and drops you off as a core part of a technical routine, where does the human start and the technicity end?


  3. Hi Luke,

    The approach of the NGO Algorithm Watch may be relevant to this discussion. https://algorithmwatch.org/en/

    Also, when thinking about the algorithmic, what I’m missing in this discussion is a deeper look into today’s prevalent mathematical and logical models (Boolean logic, set theory…). Or is it in your bibliography and I simply don’t recognize it? These models, axioms and logics shape what/who is included and what/who gets excluded.

    I like your mention of the algorithm as a speculative instrument, and will take this as an inspiration to explore further.

    best
    Francis


  4. Thanks for the comment, Francis, and for the link. I had seen that organization previously but hadn’t caught up with their latest work.

    So I guess in terms of the lack of mathematical model discussion, the quick response is that – with the notion of an algorithmic ecology – I’ve massively expanded the scope of what gets included in the algorithmic. So a 2000-word blog post only focuses on a small subset of that, and here that’s a more ‘macro-level’ view of the Uber driver.

    But looking through the Uber engineering blog, what becomes interesting is just how little these mathematical details seem to matter. Of course, some of that is disciplinary, and we need to interrogate those assumptions. But the focus for engineers is always a broader objective, and they talk in human terms, not code, about making things more scalable, responsive, or resilient. This also means that the math, the models, and the coding language change frequently in order to better accomplish certain tasks. So the question for me becomes: what are those tasks? What are the operations the algorithmic needs to enact on subjects and spaces? What forces are necessary in order for an Uber driver, for example, to become ‘enlisted’ into this broader objective?

    🙂


  5. Hi Luke,

    The first thing I’d say about the paper (apart from that I really enjoyed reading it, of course), is that I’m not sure that algorithmic is necessarily the right description of what you are talking about – or at least some of what you are talking about. What do you mean by algorithmic? And how does that differ from digital power, or data power, or tech power? I’m thinking specifically of the examples of gendered and racialised legacy bias and inequality. Are these algorithmic? They certainly feed into the whole patriarchal power assemblage of digital technology, but I don’t see them as necessarily algorithmic, as opposed to biased datasets, for example. Maybe I’m being a bit pedantic, I don’t know, but in the spirit of providing some (hopefully) constructive feedback, I’m going to say what first struck me.

    Moving on to Uber makes much more sense with the title, yet my overall feeling for the paper is that it perhaps would be sharper if it were organised around the tensions between subject and algorithm – as in the data subject/worker constructed as a hybrid of bits of other data subjects’/workers’ algorithmically mediated data (see Amoore, for example), in tension with the actual data subject and their (dis)ability to resist/control/subvert the algorithmic version/projection of themselves. When you talk about Algorithmic Power and Algorithmic Failure, it seems far too broad a topic.

    As I read the paper, there were several points where I thought you might find it really useful to read some of Louise Amoore’s work (if you haven’t already, of course). In particular her work on The Politics of Possibility, Securing with Algorithms and recent work on the hybrid data subject in medical AI machines – also especially on algorithmic/biometric sovereignty (as per one of your comments).

    I hope this is useful in some way – obviously please feel free to ignore/disagree!

    Pip


  6. Hey Pip,

    Thanks for the comments and feedback.

    My PhD is around algorithmic power, and you’re right in pointing out that a lot of the ‘setting up’ of this concept falls by the wayside in this short essay. Could we use other terms? For sure. ‘Data power’, for instance, was used by a recent conference where a lot of these same issues were discussed. For my two cents, other terms also have downsides. For instance, the ‘digital’ is often seen as a very recent phenomenon, whereas it has a 200-year etymology, and ‘tech’ is simply too general and presents challenges about focus, e.g. a hammer or spade is also techne in this sense.

    One of the productive things in the algorithmic for me is the notion of a set of operations, a performative aspect. So the products might start with that ‘biased dataset’ you mentioned, but then begin exerting force with it – taking it up and applying it to subjects, who in turn respond to these operations with their own set of moves. The same kind of performative feedback takes place within cities in response to Airbnb, resulting in the shaping of the urban fabric (housing becomes unregulated hotels). In this regard, the traditional notion of ‘data’ starts to feel very flat and static – an object which was always already there. Indeed, the ostensibly apolitical connotations of data often emerge from this kind of already-there existence: “given X, sort Y”.

    So yes, I do disagree that this is far too broad – indeed I think we need to develop a critique of algorithmic regimes that goes beyond the single object (e.g. Uber) and contributes to political agency by attempting to unpack a grammar of common operations. But a critique and defense of ideas is never a bad thing. 🙂 I definitely like Amoore’s work so will check out those papers you mentioned.

    cheers, Luke


  7. Dear Luke, thx for sharing! The idea of a subject mapped into an informational schema generating a generic object is relevant to my research. I was wondering about the expression ‘generic object’ and making sure that ‘generic’ wouldn’t allude to the “less real” or “more abstract” nature of the object. On the contrary, as you say, the object produces an alignment in the subject, which behaves in compliance with the expectation of the algorithm. I think this alignment produces the embodiment of the generic object, or the becoming-body of an algorithm – or, even better, the becoming-body of the generic object generated by an algorithm analyzing inputs coming from the particular subject. The becoming-body of the generic object can be seen as the parallel and opposite process of eye (dis)embodiment, which defines instead the becoming-algorithm of a body (its transformation into the generic object). I think the gap between the two processes is the possible space of failure and malfunctioning, and I like the idea you propose of expanding this space rather than immediately breaking it – which is in any case currently not possible. The generic object is the mediator of the transformation of a body into an algorithm and of an algorithm into a different body (the body produced by the generic object, re-embodied into the body of the particular subject), and the possibility of failure should be found in the temporal dimension of this gap, as you suggest by looking at Uber’s failures. I actually think that the complex retro-active temporal dimension of predictive algorithms is the temporal logic that needs to be failed in order to stop the generation of the generic object and its re-embodiment – a temporal logic based on a paradoxical “influence of the future over the past” (to say it with Casares and his book The Invention of Morel), and which is at work in the re-embodiment of the generic object. Looking forward to talking more!


    1. Hey Mitra, thanks for the comments – glad you found the concepts interesting. I think your idea of the production of two bodies is right on. On the one hand you have this generic subject, which diverges from the rich messiness of the lived subject in important ways, creating a space for intervention. But on the other hand, you have what you’ve identified as ‘re-embodiment’ and Gillespie calls ‘turning to face the algorithm’, in which subjects accommodate their practices to this generic object in order to make them more visible to algorithmic operations. In other words, subjects recognize the limits of the generic body and attune their techniques to attain maximum legibility. Here we might think of adopting a particular facial expression to pass airport security, or tweaking hashtags to get more likes, or an Uber driver focusing on an area or particular passengers that will boost her rating. To leverage the algorithmic, we quickly learn what matters and what doesn’t – discarding the extraneous and fine-tuning the parameters that are acknowledged. In doing so, we ‘re-embody’ the assumptions of the algorithmic, narrowing the gap between generic object and specific subject.


  8. Hi Luke,

    Interesting way to discuss the irrationality of some algorithmic systems. This is of interest to us as well: how something that is presented as the epitome of neoliberal rationalism is intrinsically irrational, even in relation to its own purposes. Your Uber analysis is really intriguing.

    You say “A new approach is needed, and indeed the development of these systems has moved away from the monolithic application and adopted a micro-service approach.” It’s not clear to me how a micro-service approach would help with the asymmetries of information between the system and the user; I would kindly propose that this is a property of a distributed / non-centralised system (perhaps blockchain is the right word here). Micro-services, though, bring another interesting point, which is the distribution of responsibility. A software engineer is responsible for developing one thing and developing it well. The emergent properties of these ecosystems are not understood by the micro-services developers because they are too focused on a single-responsibility component (as a side note: by examining the traditional terms of software engineering you can understand how these concepts diffuse/remove ethics from the process, e.g. https://en.wikipedia.org/wiki/Single_responsibility_principle).

    Similar comments could apply to your use of “encapsulation”. This concept is very familiar to software engineers who design such systems. Encapsulation as a property of a designed system helps reason about the design decisions and reduces complexity to become manageable. However, reality lacks singular responsibilities and encapsulation is, as you say, a weak abstraction that excludes/ignores properties of the human in the loop.

    The “algorithmic” is a property of a system designed by people with such mental patterns. I would suggest that some of the traditional software engineering books could provide a great source of information for these biases. These are not only technical manuals but have been influencing the language and thinking processes of the architects of these systems.

    The Mythical Man-Month by Frederick Brooks
    The Cathedral and the Bazaar by Eric S. Raymond

    Also, if it helps, Bruno Latour in his essay Morality and Technology touches on the ethical aspect of the micro-service approach.

    Dionysia – Panagiotis


    1. Hey Dionysia and Panagiotis, Thanks for the comments, which help me to sharpen some points, i.e. communicate them better.

      The microservices point is simply that the current software studies framing of the algorithmic is inadequate, precisely because there is no single block of code to be studied as text, but rather many modules within an environment that interact together. The force of the algorithmic emerges, in part, from the interaction of these operations, working in conjunction with other materialities and agencies (e.g. a GPS microservice + smartphone accelerometer + rotation of the earth + body of driver, etc.). In other words, it is emergent. You note that this diffused production creates a diffused responsibility, which is something I hadn’t thought of before, so definitely an idea to pursue. 🙂

      Then in terms of encapsulation, I’m familiar with the notion of variable or object encapsulation, considered best practice in object-oriented programming. Clean, understandable code is produced through Classes, which have particular properties and functions. I’m actually using encapsulation in a broader sense, essentially looking at how reality is mapped onto an internal schema – some things are internalized and acknowledged, others are externalized and ignored. Will definitely sharpen this definition up in the PhD text. 🙂
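      Something like this toy contrast (purely illustrative, not anyone’s actual code) captures the two senses:

      ```python
      class Driver:
          """Encapsulation in the OOP sense: internal state behind a clean interface."""
          def __init__(self, rating: float):
              self._rating = rating  # internalized and acknowledged by the schema

          @property
          def rating(self) -> float:
              return self._rating

      # Encapsulation in the broader sense: everything the schema has no slot for
      # (mood, motivation, family obligations) is externalized and ignored.
      d = Driver(4.9)
      print(d.rating)  # the platform sees this, and only this
      ```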

      Thanks for that Latour reference too, that will provide some nice theoretical underpinning to the idea of diffused ethics.

