This subproject turns to generative literature, that is, literature produced with the help of algorithms, and strives to cover its range from subalternity to co-creation. In so doing, it raises questions of authorship, creativity, and control in artistic human-machine interaction.
The subproject will focus on the history and practice of algorithm-aided literary writing. It assumes the dominance of linear algorithms for the production of texts since 1960. Such algorithms are based on formalized rules, over which the author has, in principle, complete control, and which find predecessors in aleatory, permutative, and combinatorial techniques from the baroque to the historical avant-gardes. Since around 2010, connectionist algorithms, or neural networks, have emerged as a second paradigm of generative literature that follows a different logic. They remain black boxes, have to be trained by example in a bottom-up process instead of being programmed top-down, and thus yield much less control to the author.
In both cases, knowledge production takes place as an interaction between human and machine, but the degrees of involvement, insight, and authorial hierarchy differ considerably. The subproject investigates the impact the underlying technology has on the poetics of each paradigm, ranging from products of pure chance to controlled serendipity to the execution of premeditated concepts. It assumes that authorship in generative literature is always distributed, and asks about the model of authorial control suggested by these paradigms, ranging from primary (manual writing) to secondary (programming) to tertiary (training) or, with models like GPT-3, even quaternary (generating text merely by giving examples). Finally, the subproject asks what conception of artistic processes and creativity such distanced and distributed models of writing imply.
The subproject Literary Assistance will result in a monograph with the title Poetic Algorithms, a media-historical genealogy of algorithmic approaches to the production of literary texts. While a first, preparatory part of the book will delineate historical methods of creating literary texts using externalized techniques (e.g. aleatory or rule-based) and materialities (e.g. slide wheels or paper slips), the main parts deal with specifically algorithmic approaches from the 1950s to the present, covering traditional, linear algorithms as well as modern machine learning techniques and their technological underpinnings (including works by Nanni Balestrini, Alison Knowles, Marc Adrian, Allison Parrish, et al.). Importantly, the book will employ the new methodology of source code critique for ›poetic algorithms‹ and include close readings of the actual programs used in the works discussed as well as plausible reverse engineerings of those algorithms that are no longer extant. Besides discussing questions of intentionality and authorship that result from such assisted, detached literary practices, the book will focus particularly on the interplay between technological substrates and poetological and mimetic conceptualizations.
Hannes Bajohr is a philosopher, literary scholar, and author working on the theory and practice of generative literature, media philosophy, and twentieth-century German intellectual history [@bajohr:2016; @bajohr:2018; @0x0a:2020]. He received his doctorate from Columbia University in 2017 with a dissertation on Hans Blumenberg’s theory of language, and is co-editor of the Hans Blumenberg Reader (Ithaca: Cornell University Press, 2020). He combines theoretical and practical approaches in his work on digital and generative literature. He has written extensively on digital and post-digital art and literature, and is the editor of Code und Konzept: Literatur und das Digitale (Berlin: Frohmann, 2016), a volume on the confluence of digital and conceptual writing practices. As a practitioner of generative writing, he has published works of digital literature in the context of the text collective 0x0a – most recently Poetisch denken (Berlin: Frohmann, 2020), poems generated by a neural network – and under his own name, such as Halbzeug (Berlin: Suhrkamp, 2018), which uses algorithmic techniques, glitch-writing, and the misappropriation of digital tools for literary ends.
As part of the research team, he will work on the monograph Poetic Algorithms while focusing on the subproject Literary Assistance.