Francis Hunger – Computational Capital

This extract from a longer research project discusses how electronic databases became situated in the midst of the formation of computational capital. It looks at how databases form the infrastructural part of an ongoing media-related and social shift in which subjectivity can no longer be thought of as a whole, but rather as fragmented and scattered. These partial subjectivities, stored in databases, emerged alongside fading privacy and the demise of the individual in the indivisible, modernist sense, towards the emergence of the dividual.[1]

Museo Cospiano
Giuseppe Maria Mitelli: Museo Cospiano. In: Lorenzo Legati: Museo Cospiano. Bologna 1677. archive: Universitätsbibliothek Göttingen/GDZ.

Historically, electronic database technology can be traced back to the techniques of collecting, sorting, saving and exhibiting information[2] in libraries, museums, company and government files, and similar collections. In media theory the term »database« is in broad use, addressing both electronic databases and any other themed collection. A narrower definition: an electronic database is an infrastructure for the structured storage of information.

It is a set of software applications that, like most infrastructures, does not exist by itself but consists of different infrastructural strata. Most commonly it comprises a query language, usually modeled on natural languages such as English and able to manipulate sets of information in the domain of mathematical-logical symbols. This query language can be embedded into higher programming languages and into host systems, which often provide the user interface. A translation and optimization component translates queries into machine code and names of fields and tables into memory addresses, and vice versa. Another part of the system is responsible for logging, transactional security, concurrency control, and the management of user access rights. It addresses memory, usually on hard disks and increasingly on flash memory devices.
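
For illustration only, here is a minimal query in SQL, the most widespread example of such a query language, against a hypothetical table (the table and column names are invented): the English-like statement is handed to the translation and optimization component, which maps field and table names to memory addresses and produces an optimized plan of set operations.

    -- Hypothetical example: table and column names are illustrative only.
    -- The English-like keywords (SELECT, FROM, WHERE, ORDER BY) are translated
    -- by the database engine into optimized operations on stored sets.
    SELECT name, postal_code
    FROM customers
    WHERE last_purchase_date >= '2017-01-01'
    ORDER BY postal_code;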

Database as discourse

One of the overlooked texts from cultural theory that can be made productive for media theory was authored by the American professor of media studies Mark Poster, who developed the longer chapter Databases as Discourse, or Electronic Interpellations in his 1995 book The Second Media Age. His arguments can shed light on the bio-politics of the database and will, after discussion, be extended by the notions of transactional data production and computational capital.

Poster proposes to refer to Foucault for an understanding of how power is constituted from both action and knowledge. The analytic task should no longer be situated in the act only, but in language, since databases are situated in the symbolic field and are representations of reality. With Foucault, he describes how subjectivity is inscribed into human beings through language. Language itself is subject to the influence of institutions. In a complex interaction, ideology is inscribed into subjectivity by the process of interpellation (Althusser), or hailing.

»The operation of linguistic interpellation requires that the addressee accept its configuration as a subject without direct reflection in order to carry on the conversation or practice at hand. Interpellation may be calibrated by gender, age, ethnicity or class or may exclude any of these groups or parts of them« (Poster 1995, 80). The addressee – supposedly voluntarily – subjects him/herself to the internalized constraint to accept his/her subject position as ascribed by the institution. However, the subject's position is never final; it stays open and up for re-negotiation, and as such also opens the horizon for resistance and re-orientation.

From this vantage point, Poster describes the database as »discourse of pure writing that directly amplifies the power of its owner/user« (Poster 1995, 85). Here he maintains that, in contrast to spoken language, a database is authorless in the sense that it has too many authors for them to be identifiable. The power is mediated through the ownership of the database by an entity such as an institution, a company, the military, a library, or a university. This institutional mediation enables acts of hailing the subject. He further implies that this dissolved authorship leads to a situation where nobody can be held responsible for what is collected in databases, and how.

We can extend and shift his argument by adding that databases preconfigure what is recorded and processed in and through them. In all aspects of this pre-configuration humans are actively involved: administrators, managers, engineers, user interface designers, and politicians. Though there may be too many authors of the information that is filed in a database system, Poster misses that the system itself, in all its complexity, is authored by an identifiable number of creators within institutions – potential addressees of political demands.

Another agent in the field of database discourse, another author, is the user. Today, with each query on a search engine, with each spatial movement (recorded by smartphones), with each act of consumption, users voluntarily and involuntarily produce data, that is, transactional data. At first glance one may think that this is a phenomenon of the 2000s, namely of Big Data, the promise to record everything. Poster reminds us that already when credit card payment became a working infrastructure in the 1970s, the recording of (consumer) actions was established on a regular basis.

Innovation Trends in Data Center Technology Think Tank, Dell 2013 Workshop (c) Dell Inc.

Transactional data production

The term »surveillance« is misleading. Where the Liberal consciousness identifies data collection as an act of control directed towards the individual, the argument introduced here is that the recording is, for the most part, not about surveillance but about the production and extraction of data. The framing of data recording as surveillance is a strong narrative that fits well into Libertarian ideology – in theory, in pop culture, and in politics.

It provides a vulgarized, digestible explanation, on an individual, almost narcissistic level, for the black-boxedness of database systems or, more broadly speaking, of computational infrastructure. Of course surveillance takes place in the form of policing at the most different social levels and has thus become a bio-political practice. But it should not be confused with transactional data production and the subsequent algorithmic management of labor.

What we face is a new regime of data production, a documented hailing to and recorded extraction from every participant in the social field. Each action, even the seemingly non-productive – for instance querying navigation systems, acts of consumption and payment, using infrastructure such as water from a tap, or reading (on an electronic device) – has turned into an act of data production. From the extraction of transactional data based on human actions has evolved the massive production of epistemic value.[3]

In 1494 Fra Luca Pacioli published a chapter called Particularis de Computis et Scripturis that prominently discussed techniques of double-entry bookkeeping. To the merchant's note-taking a specific succession of actions, an algorithm, was applied, which ensured that each transaction was recorded twice. Any amount recorded in one account as a debit had to be simultaneously entered in another account as a credit. The double recording of one and the same transaction created a new epistemic relation, a relation between a periodical logic of entry and exit and a topical logic of goods and capital. Receipt-based recording evoked a paper-trace basis of trust and enabled growing capital borrowing (Lauwers and Willekens 1994; Fischer 2000).
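
As a minimal sketch, with invented account names and amounts, the double-entry invariant can be restated in today's database terms: every transaction is written twice, and the sum of debits always equals the sum of credits.

    -- Illustrative only: a simplified journal in which each transaction
    -- appears twice, once as a debit and once as a credit.
    CREATE TABLE journal (
      entry_id INTEGER,
      account  VARCHAR(30),
      debit    DECIMAL(10,2),
      credit   DECIMAL(10,2)
    );

    -- A purchase of goods for 100 ducats, paid in cash:
    INSERT INTO journal VALUES (1, 'Goods', 100.00,   0.00);
    INSERT INTO journal VALUES (1, 'Cash',    0.00, 100.00);

    -- The defining invariant of double entry: debits equal credits.
    SELECT SUM(debit) - SUM(credit) AS imbalance FROM journal;  -- expected: 0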

Luca Pacioli
Jacopo de Barbari – Ritratto di Fra Luca Pacioli, ca. 1495.
Double-entry ledger from J. and H. Hadden and Company Limited, Hosiers, Nottingham (source: University of Nottingham, Manuscripts and Special Collections, Ha A 3-5)

Transactional recordings using double-entry bookkeeping thus helped to establish a complex and de-personalized commercial practice. This new transparency blended in with a larger trend of de-personalization of capital in the early Renaissance, or, as the economist Rob A. Bryer notes: »double-entry bookkeeping emerged, as capital became socialized, in response to a collective demand from investors for the frequent calculation of the rate of return on capital as the basis for sharing profits« (Bryer 1993, 114f.).

What we witness from then on is the steady, algorithmic production of data (in relation to profit and property) to generate knowledge, which provides the individual merchant with an advantage over his competitors and which allows unrelated providers of capital to invest in a trade.

Given the impact of transactional data on economic processes from the 14th century on, the current expansion of transactional recording appears in a different light. What is currently perceived as an excessive increase in data collection (or as surveillance) is in fact the expansion of the production of a specific kind of data: transactional data. We are witnessing the early traces of another incarnation of the capitalist economy.

Which new semantic relations get established? How does surplus data change the political, social and economic spheres? How does it change culture? How does this new epistemic quality change social mediation and media?

Implosion of the private

Until recently, the differentiation between the private and the public sphere was structured in a specific way. This distinction is best illustrated by the pre-modern village, where everybody knew everybody and all personal actions were known and public. In modernity, the city began to function differently: it enabled anonymity, a feature enjoyed by its liberal citizenry.

In contrast, the contemporary city – pervaded by mobile phone networks, wireless internet access points (able to track all mobile devices that move through their footprint), satellite-based navigational information, and networked card payment methods – tears this promise of individual freedom apart: »Individual actions now leave trails of digitized information which are regularly accumulated in computer databases …« (Poster 1995, 65).

Each interaction, especially acts of consumption and of spatial movement, produces transactional data that is recorded. And there is no conspiracy to it: the expansion of transactional data production is a necessity for businesses under the current economic system.

This recording of the formerly ignored creates a new mode of transgression of subjectivity, which can be combined with individual hailing (mostly through advertisement) as soon as a person has been identified, either through metadata or by login. »No event, no action, no communication – or non-communication – which wouldn't be potentially transactionally relevant« (Engemann 2014, 377).

Modernity's urban anonymity thus implodes into McLuhan's global village, where every action is known. Village being the operative word – with a slight twist: the information about individual actions no longer circulates across garden fences or the regulars' table at the local pub, eventually to be forgotten. It now circulates in global machinic networks. The information never fades, but is recorded indefinitely in databases. It is not out in the open, but privatized, eventually to be used in the capitalist valuation process.

Marshall McLuhan / Quentin Fiore: The Medium is the Massage. Bantam Books, 1967, p. 12f.

Scattered, decentered subjectivities

»Databases are discourse, in the first instance, because they effect a constitution of the subject. They are a form of writing, of inscribing symbolic traces, that extends the basic principle of writing as différance, as making different and as distancing, differing, putting off to what must be its ultimate realization« (Poster 1995, 80). Since hailings may be adapted to gender, race, ethnicity, class, or other categories, database technology is ideally suited for inscribing différance, and since databases belong to an institution, organization, or corporation, their discourse is able to amplify the power of their owner.

The resulting discursive construction of subjectivity »inscribes positionalities of subjects according to its [the databases'] rules of formation« (Poster 1995, 87). Subject constitution in database systems operates in such a way that it »refutes the hegemonic principle of the subject as centered, rational and autonomous« (ibid.) – to me the major point in Poster's text. »For now, through the database alone, the subject has been multiplied and decentered, capable of being acted upon by computers at many social locations without the least awareness by the individual concerned yet just as surely as if the individual were present somehow inside the computer.« (ibid.)

This in turn means that every single database applies a different mode of hailing to the individual it references, thus constructing a scattered multiplicity of parts of the individual. Neither the owner nor the individual knows which part of subjectivity is saved in a particular database. What emerges is therefore a decentralized, fragmented, always potentially combinable bio-power that concerns the subject. And this bio-power is driven by computational capital – the control over the resources of computation and transactional data.

The fact that our bodies are always connected to databases over networks calls for another politics of the body: a body that can no longer hide from the public eye in some private mansion and that no longer leaves production by entering a place called »leisure time«.[4]

Michaela Ott and Gerald Raunig have recently proposed the term dividual to better grasp the scattered, fragmented individualities of the information age (Raunig 2015; Ott 2015).[5] The discursive power of database systems lies in their ability to interconnect pieces of information, put them in relation to each other, and constantly re-arrange this epistemic configuration according to a query. Querying thus becomes a dividual practice in itself.

Computational Capital

Computational capital means the power of disposal over data and computing infrastructure. It aims at generating value in a specific form that is translatable into economic capital.[6] Akin to the medieval merchants' practice of double-entry bookkeeping, computational capital makes use of an epistemic practice: the recording of transactional data and the acting upon the information generated from that data.

This leads to the question of how the hailing with and through databases – and their inscribed algorithms, which represent human preconceptions – interacts with questions of class, gender, and ethnicity. Since interpellations can be adapted to gender, ethnicity, class, or other categories, database technology is ideally suited for inscribing différance, and since databases belong to institutions, their discourse is able to amplify the power of their owners.

Symbol-processing computing machines do not understand meaning. They use algorithms to ascribe meaning and present the outcome to humans, or to machinic entities for further processing. Ranking algorithms and sentiment analysis reach back to the attempts of the late 1950s to create computer-based information systems.

It is an inherent quality of such algorithms that they weight informational objects, because this is how meaning is ascribed. Bias is systemic to this approach. Everything that has been split off in the process of formalizing information for inclusion in the database is non-existent and irrelevant to the database's contents. This leads to the exclusion of any information that could level out bias in the contained data body.[7]

Regarding the dividual, this means that whoever writes the query looks for a mathematical formulation for deriving différence from a given and limited data set. Since the goal of any query is a result that can lead to an appellation, it is in the nature of querying to formulate it along a différence – which can be family status, age, sexual interests, or credit score, and which in turn affects political categories such as gender, ethnicity, and class.
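
As a hypothetical sketch (the tables, columns, and thresholds are invented for illustration), such a query derives a dividual segment from stored transactional traces and formulates a différence in purely operational terms:

    -- Illustrative only: segmenting dividuals for a targeted hailing.
    -- Table and column names are hypothetical.
    SELECT p.person_id, p.postal_code
    FROM profiles p
    JOIN transactions t ON t.person_id = p.person_id
    WHERE p.family_status = 'single'
      AND p.age BETWEEN 25 AND 34
      AND p.credit_score < 600
    GROUP BY p.person_id, p.postal_code
    HAVING SUM(t.amount) > 1000;  -- only those whose spending makes them worth addressing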

Historically, the term class has been applied in manifold ways: first, to describe people according to their economic role and abilities in relation to society at a particular historical period; second, to acknowledge them as sharing a common structural quality; third, to develop or appropriate a term (Begriff) that allows identification and self-identification.

At some point this term provided members of the class with the terminological ability to reflect on their own situation and act upon it, whatever the outcome. A term for the in/dividuals subjected to the regime of data production based on their subjectivity has not yet emerged. Computational capital, if further developed as a notion for describing the emergence of contemporary social relations, has the potential to address the appropriation of value through transactional data and the unequal distribution of the value generated.

Demystifying databases means interpreting them as institutional or organizational tools of hailing, addressing, and agency. Databases and algorithms are not first and foremost technology; rather, they represent human ideas about potential (inter)actions. Databases amplify institutional power, since they are able to address the dividual on an individual level, and they do so based on the transactional recordings of the addressee's former acts. A critique of database systems – understood as a set of practices of agency – does not begin with the demand for privacy. It begins with addressing the query and its institutional context, which represents the shaping of an informational request as a dividual practice.

francis.hunger@irmielin.org, December 2017

Notes

[1] The framework for this text is Ph.D. research at Bauhaus University Weimar on »The Form of the Database«, which deals with the emergence of the relational database model in the United States and Western Europe compared to East Germany and the Eastern Bloc, 1950–1990.

[2] Information is data brought into formation – on a form, a table, a punchcard, or the hard disk. (Krajewski 2007, 2011)

[3] Already in 1995, Mark Poster recognized the importance of transactional data. »Increasingly economic transactions automatically enter databases and do so with the customer’s assistance. Credit card sales are of course good examples.« (Poster 1995, 86)

[4] Leisure time today has turned into an intensified period of transactional data production when any kind of networked electronic device is used for entertainment media consumption (Facebook, Twitter, Instagram, Youtube, Pornhub, Tinder, Netflix) and for other recreation involving acts that can be electronically recorded.

[5] The term information age is misleading insofar as it implies that we have left the industrial age behind us, as if it were over and no longer relevant. We can partly ascribe this ignorance to academic fashion, which prefers the new over the known. This ignorance, and a weariness with vulgar Marxism's ownership-of-the-means-of-production rhetoric, may have led to today's situation, where the information age is discussed as if no industry existed anymore and nothing were produced anymore. Foxconn's workers can tell a different story (Duhigg and Barboza 2012).

[6] The notion here is developed in a similar way to Pierre Bourdieu's social and cultural capital, which builds on Marx's notion of capital (Bourdieu 1983; Heinrich 1999). To denote capital related to the economic sphere, and to differentiate it from cultural, social, and aesthetic capital, the notion »economic capital« is used following Bourdieu, but as reconstructed by Michael Heinrich and Moishe Postone (Heinrich 1999; Postone 2003).

[7] Even a cursory look at institutional settings and the distribution of power in tech companies can suggest how bias is productive in discourse. Most of these companies are structured along a predominantly male subjectivity, based in the value set of engineering. It may be more than speculation that this leads to inclusions and exclusions on a subconscious level, which in turn are reflected in the base of the accumulated data.

Bibliography

Bourdieu, Pierre. 1983. „Ökonomisches Kapital, kulturelles Kapital, soziales Kapital.“ In Soziale Ungleichheiten, edited by Reinhard Kreckel, 183–198. Soziale Welt, Sonderband 2. Göttingen.

Bryer, R.A. 1993. „Double-Entry Bookkeeping and the Birth of Capitalism – Accounting for the Commercial Revolution in Medieval Northern Italy“. Critical Perspectives on Accounting 4 (2):113–40.

Burkhardt, Marcus. 2015. Digitale Datenbanken – Eine Medientheorie im Zeitalter von Big Data. Bielefeld: transcript. (PDF)

Duhigg, Charles, and David Barboza. 2012. „Apple’s iPad and the Human Costs for Workers in China“. The New York Times, January 25, 2012. https://www.nytimes.com/2012/01/26/business/ieconomy-apples-ipad-and-the-human-costs-for-workers-in-china.html.

Engemann, Christoph. 2014. „You cannot not Transact – Big Data und Transaktionalität“. In Big Data, 365–381.

Fischer, Michael J. 2000. „Luca Pacioli on business profits“. Journal of Business Ethics 25 (4):299–312.

Heinrich, Michael. 1999. Die Wissenschaft vom Wert – Die Marxsche Kritik der politischen Ökonomie zwischen wissenschaftlicher Revolution und klassischer Tradition. Münster: Westfälisches Dampfboot.

Krajewski, Markus. 2007. „In Formation – Aufstieg und Fall der Tabelle als Paradigma der Datenverarbeitung“. In Nach Feierabend: Zürcher Jahrbuch für Wissensgeschichte: Datenbanken, 37–55. Diaphanes.

———. 2011. Paper Machines: About Cards & Catalogs, 1548–1929. History and Foundations of Information Science. Cambridge, Mass.: MIT Press.

Lauwers, Luc, und Marleen Willekens. 1994. „Five hundred years of bookkeeping – a portrait of Luca Pacioli“. Tijdschrift voor Economie en Management 39 (3):289–304.

Ott, Michaela. 2015. Dividuationen: Theorien der Teilhabe. Berlin: b_books.

Poster, Mark. 1995. The Second Media Age. Cambridge, UK: Polity Press; Cambridge, Mass.: B. Blackwell. (PDF)

Postone, Moishe. 2003. Zeit, Arbeit und gesellschaftliche Herrschaft. Freiburg: Ça ira.

Raunig, Gerald. 2015. Dividuum. Maschinischer Kapitalismus und molekulare Revolution. Bd. 1. Wien: Transversal Texts.

5 Comments

  1. Hey Francis, Interesting text that provides a really productive genealogical overview of different historical ‘moments’ that led to the database, and how this database shapes us by constructing subjectivities.

    One of the key ideas here is interpellation, Althusser’s idea that we respond to the hail and become who is hailed – in his classic example, the policeman hails us and we turn, becoming the guilty party. But why do we turn? Or in your case, why do we *allow* our data to be captured, stored, parsed and processed by information infrastructures? Certainly there is a degree of pressure, of disciplinary power by the corporations or states who wield these tools. But we are also complicit and aware – we make Faustian pacts in regards to privacy, we give up data because we get things in return, whether through Facebook’s social connections or Google’s office productivity tools. So why do we turn? Judith Butler in her critique of Althusser essentially says, because we have a ‘willingness to turn’. And so it could be helpful to work desire back in – to rehabilitate the agency of the User, as a nice corrective to ideas of the panoptic, the oppressed, the subjectivated. 🙂

    I thought your idea of the Query was really strong. I’d be interested to see how, for example, a MySQL query was structured, and how this form of writing interrogates the stored form of writing of the database. What does the Query recognize as valid discourse? And what gaps of knowledge are opened up by its failures, or incomplete results?


    1. Hi Luke,

      regarding »the query«, Krajewski’s texts may give a perspective on how formation becomes in-formation, which is important because that’s where structuring already takes place, before the query. Maybe I should stress that more.

      What a relational query is able to do is interconnect data from different sets of information according to distinct, unique criteria (e.g. an ID number). The resulting set of information only exists as long as the query is asked. This ad-hoc-ness is maybe the most interesting feature. In SQL (approx. 70% of today’s databases in use) the query can only address what has been saved in a specific field. If an information domain »sex« is designed to contain either the value »male« or »female«, then for the reality of the database system there is only male, female, or nothing. So you have these distinct notions, and all grey areas in between are not addressable unless included in the database model.
      The thing with the Big Data approach is to record literally everything and see whether we humans have developed, or will develop, algorithms (sets of mathematical-logical instructions) to harvest in-formation. While in SQL – relational algebra – structure is imposed on data in the process of entering it into the database, often via the metaphor of a table or a form, in Big Data structure is applied post-recording, using more complex algorithms and comparing the recorded set of unstructured data against a set of statistical – that is, structured and discoursified – data.

      So in SQL/the relational model, what the query recognizes as valid discourse is inscribed into the model of the database, while in Big Data the query plus the comparison with specifically prepared, structured statistical data defines what is recognized as valid discourse.
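
      As a minimal sketch (table and column names are invented for illustration), the point about the »sex« domain and the ad-hoc join could look like this in SQL:

          -- Illustrative only: the schema fixes what any query can ever "see".
          CREATE TABLE persons (
            person_id INTEGER,
            sex       VARCHAR(6)   -- only 'male' or 'female' were foreseen by the model
          );
          CREATE TABLE purchases (
            person_id INTEGER,
            product   VARCHAR(50)
          );

          -- An ad-hoc join by ID: the resulting set exists only for the duration of the query.
          SELECT p.person_id, p.sex, pu.product
          FROM persons p
          JOIN purchases pu ON pu.person_id = p.person_id
          WHERE p.sex = 'female';  -- grey areas outside the foreseen values remain unaddressable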

      Best Francis


  2. Hi Luke,

    regarding interpellation and the question of why we turn: Mark Poster has discussed this in much more detail than was possible in this blog post, so I have uploaded his text (link in my bibliography).
    For instance, Poster suggests that the whole situation gets more complicated because there are many authors of a database system, so that it is more or less »authorless«. Shifting his point, I argued that there are identifiable »policemen« (administrators, managers, engineers, user interface designers, and politicians) in institutions hailing the guilty party. With Hito Steyerl one can note that the database system functions as a »proxy« in this hailing, and with Poster, Ott and Raunig we can note that the guilty party is some scattered subjectivity, and that the fragmented, dividual parts of a subject get hailed.
    I no longer ask »why do we give up privacy?«; I rather take it as a given fact. Because – so my argument goes – it is not about privacy, but about a new mode of production, which is the machinic-calculatory capitalization on human desire(s). Where in the industrial period the human desire to produce material goods was mechanized (maschinisiert), today it is the human desire to communicate that gets mechanized.
    So in my view the agency of users can only return if they understand that they produce. Produce data. Produce transactional data.

    I don’t think I can upload all the source texts mentioned here; drop me an email and I can give them to you personally.

    Best, Francis.


  3. Hi Francis

    I really like Mark Poster – glad you have un-overlooked him here. Your discussion of the database and authorship / non-authorship is fascinating here and really relevant to some things I’ve been doing too about the author/reader in a Barthean sense, so will be useful for me and interesting to discuss in Berlin. Also really interested in what you say about the dividuality and subjectivity of the database.

    I think I understand what you mean by surveillance being a misleading term to describe data collection (in terms of the way the term is popularly framed), and I follow the thought of production and extraction as alternative descriptions that produce computational capital rather than surveillance… but can we not think of the misuse of the production and extraction of data as a kind of surveillance? Is that not an important by/co-product of the digital connected database? Just some thoughts… I enjoyed your paper very much,

    Pip


  4. Hi Francis,

    You make a strong case for the raison d’être of databases and you provide, indeed, some evidence to support your idea that databases are institutional tools by default. “Demystifying databases means interpreting them as institutional or organizational tools” – that was a strong statement and you do do that.

    Regarding surveillance, this point attracted our attention too, as it did Pip’s and Luke’s, who notes that we are aware. Your argument that there is surveillance but that it is only a fraction of the activity around data is surely fair, but just to add to all the previous comments: you sort of imply that only surveillance is discussed, whereas data extraction is overlooked. We are also not sure whether there are any subcultures where you encounter the discourse around surveillance but not the discourse around data extraction.

    On the other hand, you might be correct; it’s definitely a term that fits better than exploitation in the public discourse. First of all, as you say, it’s more ego-centric than the latter. But it might also have to do with the psychology of the mass: the term surveillance implies that if you have nothing to hide, at the end of the day, you can just let it be. Whereas the term exploitation feels a bit less comfortable; it urges you to act with a leftist agenda that doesn’t seem to be very popular in the public discourse right now.

    See you in Berlin,
    Dionysia and Panagiotis

