Reading and Evaluating Code in Literary Scholarship
According to various scholars (e.g. Burgess and Hamming 2011; Clement 2016), there is a dichotomy between, on the one hand, a ‘pure’ intellectual realm associated with scholarly writing and academic print publication and, on the other hand, the ‘material labor’ that supports the performance of that intellectual realm, for example instrument making or programming. On closer inspection such a dichotomy turns out to be largely artificial. For their argument Burgess, Hamming, and Clement refer to the earlier work of Bruno Latour (1993), who casts the defining characteristic of modernity as a process of ‘purification’ that sets the human culture of modernity apart from nature. Burgess and Hamming observe a congruent process in academia: “Within the academy we see these processes of purification and mediation at work, producing and maintaining the distinction between intellectual labor and material labor, both of which are essential to multimedia production” (Burgess and Hamming 2011:¶11). This process serves to distinguish between scholarly and non-scholarly activities: “The distinction between intellectual and material labor is pervasive throughout scholarly criticism and evaluation of media forms. […] In addition, any discussion of scholarly activities in multimedia formats are usually elided in favor of literary texts, which can be safely analyzed using traditional tools of critical analysis” (Burgess and Hamming 2011:¶12). However, as Burgess and Hamming note, this distinction rests upon a technological fallacy already pointed out by Richard Grusin in 1994. Grusin argued that hypertext has not changed the essential nature of text, as writing has always already been hypertextual through the use of indices, notes, annotations, and intertextual references.
To assume that the technology of hypertext has revolutionarily unveiled or activated the associative nature of text amounts to the fallacy of ascribing the associative agency of cognition to the technology, which is, of course, a ‘mere’ expression of that agency.
To assume an intellectual dichotomy between scholarly publication resulting from writing and code resulting from programming is a similar technological fallacy. To assert that scholarship is somehow bound exclusively to print publication is akin to ascribing agency to the technology of written text, because such an understanding of scholarship presupposes that something is scholarship because it is in writing: that writing makes it scholarship. But obviously publication is a function of scholarship, and scholarship is not a function of publication, because scholarship does not arise from publication but is ‘merely’ expressed through it.
If scholarship expresses anything through publication it is argument, which, much more than writing, is an essential property of scholarship. But in essence it does not matter how, or in which form, the argument is made: through numbers, pictures, symbols, words, or objects. These are all technologies that enable us to shape and express an argument. This is not to say that technologies are merely inert and neutral epistemological tools; different technologies shape and affect argument in different ways. Technological choices do matter, and different technologies can enrich scholarly argument. Producing an argument requires some expressive technology, and the knowledge and ability to wield that technology effectively, which in the case of writing is called ‘literacy’. As Alan Kay (1993) observed, literacy is not just fluency in the technical skills of reading and writing. It also requires a “fluency in higher level ideas and concepts and how these can be combined” (Kay 1993:83). This fluency is both structural and semantic. In the case of writing as a technology it concerns, for instance, sentence structure, semantic cohesion between sentences, and how to express larger ideas by connecting paragraphs and documents. These elements of literacy translate to the realm of coding and computing (Vee 2013), where fluency concerns the syntax of statements and how to express concepts, for instance as object classes, methods, and functions that call upon other programs and data structures to control the flow of computation. Text and writing may still be the most celebrated semiotic technologies for expressing an argument, but if computer code is understood as ‘just another’ literacy (cf. Knuth 1984, Kittler 1993, Vee 2013), it can equally be a medium of scholarly argument.
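The structural parallel between prose literacy and code literacy can be made concrete with a small, purely illustrative sketch; the function names and the example sentence are our own and not drawn from any cited work. Just as sentences combine into a paragraph that carries a larger idea, small functions here combine into a program that carries a (trivial) computational argument:

```python
from collections import Counter

def word_counts(text: str) -> Counter:
    """A 'sentence-level' unit: one small, self-contained statement of method."""
    return Counter(text.lower().split())

def lexical_richness(text: str) -> float:
    """A 'paragraph-level' unit: combines the smaller unit into a larger idea,
    here the ratio of distinct words to all words (type-token ratio)."""
    counts = word_counts(text)
    total = sum(counts.values())
    return len(counts) / total if total else 0.0

print(lexical_richness("the cat sat on the mat"))  # 5 types over 6 tokens
```

Reading such code fluently means recognizing not only its syntax but also how the smaller construct is composed into the larger one, just as a literate reader follows how sentences build a paragraph.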
We start from this assertion that coding and code—as the source code of computer programs that is readable to humans and that drives the performative nature of software (Ford 2015, Hiller 2015)—can be inherent parts of scholarship or even scholarship in itself. That is: we assert that code can be scholarly, that coding can be scholarship, and that there is little difference between the authorship of code and that of text (Van Zundert 2016).
There are two compelling reasons why code should be of interest to scholars. First, much has been written about the dramatic increase of software, code, and digital objects in society and culture over the last decades, often in a lamenting or dystopian vein (Morozov 2013, Bostrom 2016). Doomsday prognostications aside, there is ample evidence that society and its artifacts are increasingly made up of a ‘digital fabric’ (Jones 2014, Berry 2014, Manovich 2013). This means that the objects of study of humanities scholars are changing too: literature, texts, movies, games, and music increasingly exist as digital data created through software (and thus code). This different fabric also gives rise to cultural objects with different and new properties, for instance in the case of electronic literature and storytelling in general (Murray 2016). It is thus crucial for scholars studying these new forms of humanistic artifacts to understand how to read code and the computational processes it represents. Second, code and software are increasingly part of the technologies that humanities scholars employ to examine their sources; examples abound (e.g. Van Dalen and Van Zundert 2007; Tangherlini 2012; Piper 2015; Kestemont, Moens, and Deploige 2015). Understanding the workings of code is therefore becoming a prerequisite for a solid methodological footing in the humanities.
Over the last decade code and its creation have become substantial parts of the objects and methods of study within the humanities (cf. for instance Underwood and Sellers 2015; Ramsay 2011). Coding and code now represent the material and intellectual labor of scholarship, and as such constitute an important part of scholarly epistemology. Conventional epistemological instruments—such as historiography, close reading, deconstruction, and hermeneutics—that constitute large or important parts of analysis have the property that their expression in writing is also their scholarly publication. In effect most scholarly articles are de facto inscriptions of such epistemological tools: they inscribe the argument a humanist shaped using this particular abductive style of reasoning, which is quite indigenous to the humanities. These inscriptions are then offered to other scholars for scrutiny and evaluation, making the research accountable through peer review. Confusingly, in the humanities analysis and inscription are thus often one and the same: the argument is shaped by creating text, and the report is a polished version of the argument inscribed as text. A significant amount of writing and intellectual/material labor does not, of course, make the “final cut”, so to speak; our point is rather that the material expression of this labor is of the same kind.
As an epistemological instrument, code has the interesting property of representing both the intellectual and the material labor of scholarly argument in computational research. Code affords method not as a prosaic, descriptive abstraction, but as the actual, executable inscription of methodology. However, the code underpinning the methodological elements of the discourse is itself not presented as an element of that discourse. Its status is akin to how data and facts are colloquially perceived: as givens, objective and neutral pieces of information or observable objects. But like data (Gitelman 2013) and facts (Betti 2015), code is unlikely ever to be ‘clean’, ‘unbiased’, or ‘neutral’ (cf. also Berry 2014). Code is the result of a particular literacy (cf. for instance Knuth 1984, Kittler 1993, Vee 2013) that encompasses the skills to read and write code, to create and relate larger code constructs and code objects, and to express concepts, ideas, and argument. And like text, code can be wielded to achieve an intended effect or to produce unintended side effects (cf. e.g. McPherson 2012). In other words: code has rhetorics. Therefore, what Johanna Drucker (2011) holds about data—that rather than ‘given’, data should be called ‘capta’, i.e. ‘that which was taken’—is all the more true for code: it is a symbolic system with its own rhetoric and cultural embeddedness (Marino 2006) and latent agency (Van Zundert 2016). Thus, rather than accepting code and its workings as an unproblematic expression of a mathematically neutral or abstract mechanism, we should regard it as a first-order part of a discourse that should be under peer scrutiny as a whole. If code and coding are substantial constituent parts of the argument, it would be quite extraordinary not to subject them to the same forms of scrutiny and accountability. Yet code is hardly ever regarded as scholarly output, and if it is published at all it is seldom if ever evaluated (cf. Schreibman, Mandell, and Olsen 2011).
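What ‘executable inscription of methodology’ can mean is perhaps easiest to see in a small, hedged sketch. Burrows’s Delta, a widely used stylometric distance measure, is usually described in prose as the mean absolute difference of z-scored relative word frequencies; the code below is one possible simplified inscription of that description, not the code of any study cited here, and the function name and data layout are our illustrative assumptions:

```python
import statistics

def burrows_delta(freqs_a, freqs_b, corpus_freqs):
    """Burrows's Delta between two texts.

    freqs_a and freqs_b map each of the n most frequent words to its
    relative frequency in text A and text B; corpus_freqs maps each word
    to the list of its relative frequencies across all corpus texts,
    from which mean and standard deviation are taken."""
    deltas = []
    for word, samples in corpus_freqs.items():
        mu = statistics.mean(samples)
        sigma = statistics.stdev(samples)  # sample standard deviation
        z_a = (freqs_a[word] - mu) / sigma
        z_b = (freqs_b[word] - mu) / sigma
        deltas.append(abs(z_a - z_b))
    return sum(deltas) / len(deltas)
```

Presented as code rather than prose, every methodological decision (for instance, the use of the sample rather than the population standard deviation) is explicit and thus open to exactly the kind of peer scrutiny argued for above.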
However, the acceptance of code as another form of scholarly argument poses problems for the current scholarly process of evaluation, because well-developed methods for reading, reviewing, and critiquing scholarly code are lacking. Digital humanities, as a site of production of non-conventional research outputs—digital editions, web-based publications, new analytical methods, and computational tools, for instance—has spurred the debate on evaluative practices in the humanities, precisely because practitioners of digital scholarship acknowledge that much of the relevant scholarship is not expressed in the form of traditional scholarly output. Yet the focus of review generally remains on “the fiction of ‘final outputs’ in digital scholarship” (Nowviskie 2011), on old-form peer review (Antonijevic 2016), and on approximating equivalencies between digital content and traditional print publication (Presner 2012). Discussions around the evaluation of digital scholarship have thus “tended to focus primarily on establishing digital work as equivalent to print publications to make it count instead of considering how digital scholarship might transform knowledge practices” (Purdy and Walker 2010:178; Anderson and McPherson 2011). In reaction, digital scholars have stressed that peer review of digital scholarship should foremost consider how digital scholarship differs from conventional scholarship. They argue that review should focus on the process of developing, building, and knowledge creation (Nowviskie 2011), on the contrast and overlap between the representationality of conventional scholarship and the strongly performative aspects of digital scholarship (Burgess and Hamming 2011), and on the specific medium of digital scholarship (Rockwell 2011).
Where the debate on peer review of digital output might have propelled a discourse on reading code and on code literacy as an epistemological technology of scholarship, it has instead been geared to high-level evaluation, concentrating for instance on how digital scholarship could be reviewed in the context of promotion and tenure evaluations. Very little has been proposed in the way of concrete techniques and methods for the practical review of program code.1 Existing guidance pertains to digital objects such as digital editions (Sahle and Vogler 2014) or to code as cultural artifact (Marino 2006), but no substantial work has been put forward on how to read or review scholarly code. We are left with the rather general statement that “traditional humanities standards need to be part of the mix, [but] the domain is too different for them to be applied without considerable adaptation” (Smithies 2012), and the often echoed contention that digital artifacts should be evaluated in silico, as they are, and not as they manifest in conventional publications. The latter argument is probably most succinctly put by Geoffrey Rockwell (2011): “While such narratives are useful to evaluators […] they should never be a substitute for review of the work in the form it was produced in.”
-
The situation is different in the sciences, where more concrete experiments with code review can be found. For instance, the Journal of Open Source Software (http://joss.theoj.org/about) attempts to address these challenges by creating a platform for the submission, review, and validation of scientific code and software.