Reading Computer-Generated Texts
Natural language generation (NLG) is the process whereby computers
produce output in readable human languages. Such output takes many
forms, including news articles, sports reports, prose fiction, and poetry.
These computer-generated texts are often indistinguishable from
human-written texts, and they are increasingly prevalent. NLG is here,
and it is everywhere. However, readers are often unaware that what they
are reading has been computer-generated. This Element considers how
NLG conforms to and confronts traditional understandings of authorship
and what it means to be a reader. It argues that conventional conceptions
of authorship, as well as of reader responsibility, change in instances
of NLG.
Leah Henrickson
ISSN 2514-8516 (print)
Elements in Publishing and Book Culture
edited by
Samantha Rayner, University College London
Leah Tether, University of Bristol
READING COMPUTER-GENERATED TEXTS
Leah Henrickson
University of Leeds
www.cambridge.org
Information on this title: www.cambridge.org/9781108822862
DOI: 10.1017/9781108906463
First published online: January 2021
Author for correspondence: Leah Henrickson, L.R.Henrickson@leeds.ac.uk
Contents

1 Introduction
5 Conclusion
References
1 Introduction
Every morning, Simon walks down the street to his local café in
Wolverhampton. He orders a medium black filter coffee, which he nurses
as he flips through the Express & Star, an independent regional newspaper.
First, the cover story. Then, the less pressing items. The newspaper’s
contributors are good at what they do, tending towards fair representation
of issues and citing relevant supporting data. Today, a story entitled
‘Majority of New Mothers in Wolverhampton Are Unmarried’ catches
Simon’s eye. He reads the story’s introduction: ‘The latest figures reveal
that 56.5 per cent of the 3,476 babies born across the area in 2016 have
parents who were not married or in a civil partnership when the birth was
registered. That’s a slight increase on the previous year.’ This is a sensitive
issue, thinks Simon. Well, it takes a special kind of journalist to consider such
a subject so objectively. Simon continues reading the article, which cites
figures and statements courtesy of the Office for National Statistics. It is not
until he reaches the end of the text that Simon reads the following statement:
‘This article has been computer-generated by Urbs Media, crafting stories
and harnessing automation to mass localise.’ This closing statement is
adapted from Urbs Media’s own description of its work (Urbs Media, n.d.),
but it does not accompany any of their articles.
Moreover, Simon’s conflicted feelings have been paraphrased from
a series of focus groups conducted to discern readers’ emotional responses
to the idea of computer-generated texts (Henrickson, 2019c). While some
may regard a computer’s ability to generate intelligible narratives as being
contained within the realm of science fiction, the technology that enables
a computer to generate cogent prose has been in development for more
than half a century. Computer scientists have long been engaged with
programming computers to generate texts that are indistinguishable from
those written by humans. Now we have reached a point when there are
systems generating texts that we may read as part of our daily routines,
unaware of their being computer-generated. The production of data-
driven sport, weather, election, and business intelligence reports has
been assigned to computers capable of producing these texts at a rate
incomparable to that of humans, and on personalised scales that could
hardly be considered efficient uses of time for paid human labour. Yet
when we read these texts, we assume that they are social products.

Early experiments in mechanised text production helped pave the way for
what we now call NLG.
NLG enjoyed a surge of interest within the academic community
throughout the 1970s to 1990s. This interest also permeated the popular
sphere. In 1986, Arthur C. Clarke (2000) published a fictional short story
about ‘The Steam-Powered Word Processor’, which Reverend Charles
Cabbage uses to mindlessly produce his sermons. Even earlier, in 1954,
Roald Dahl wrote about ‘The Great Automatic Grammatisator’: a novel-
producing machine that one Adolph Knipe uses to render human writers
obsolete. Despite such widespread interest, though, NLG research has only
recently begun engendering distinct intellectual traditions, especially
related to developmental approaches. As David McDonald (1986: 12)
suggests, this may be a result of the individualised nature of systems.
Indeed, few NLG systems comprising the field’s lineage remain. Written
in programming languages now extinct, or saved in digital formats that have
deteriorated or disappeared altogether, many of these systems survive only
through secondary literature.1 Further, developers tend not to build upon
work that has already been done, perhaps for reasons of inadequate digital
preservation and/or claims for intellectual property. The field is in disarray,
with no comprehensive analysis of NLG output reception ever having been
published. As a result, we do not know where computer-generated texts fit
within our current conceptions of authorship and reading. This section
offers the necessary technical context for succeeding sections, which exam-
ine where computer-generated texts fit within conventional understandings
of authorship and what it means to be a reader as per the hermeneutic
contract.

1 Thanks to James Ryan for his digital curation of these materials.
The student must instead direct the teacher to ‘grip the knife’s handle with
your right hand’, and then to ‘move your right hand, still gripping the knife,
up approximately seven inches’, and so on. Each of these instructions
represents a step in an algorithm: an unambiguous, ordered sequence of
instructions. Recipe analogies are commonly used when clarifying what an
algorithm is: like an algorithm, a recipe is an ordered series of precise
steps, and small deviations can have outsized effects. One extra teaspoon
of baking powder can collapse a cake.
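To make this concrete, consider a small sketch in code (my own illustration, with invented step names; it is not drawn from any system discussed in this Element). Each step is an unambiguous operation on the current state, executed exactly as specified:

```python
# A toy model of instructions as algorithmic steps: the 'teacher' follows
# each instruction literally, with no room for interpretation.

def grip(state, obj):
    """Grip a named object with the right hand."""
    state['gripping'] = obj
    return state

def raise_hand(state, inches):
    """Move the right hand up by an exact distance."""
    state['hand_height_inches'] += inches
    return state

steps = [
    lambda s: grip(s, 'knife handle'),  # 'grip the knife's handle'
    lambda s: raise_hand(s, 7),         # 'move ... up approximately seven inches'
]

state = {'gripping': None, 'hand_height_inches': 0}
for step in steps:
    state = step(state)

print(state)  # {'gripping': 'knife handle', 'hand_height_inches': 7}
```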
This Element, however, refers to NLG systems rather than programs.
This is because generated output more often results from a series of
programs than from a single program. Each program informs the next
program’s functionality, and together these programs make a system.
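A minimal sketch may illustrate the principle (the programs, names, and data below are my own invention, loosely echoing the Express & Star example above): three small programs, each feeding the next, together constitute a rudimentary NLG system.

```python
# Program 1: decide what to say.
def select_content(data):
    return {'place': data['place'], 'rate': data['unmarried_pct']}

# Program 2: order the selected content into a document plan.
def plan_document(facts):
    return [('lead', facts)]

# Program 3: realise the plan as English sentences.
def realise(plan):
    sentences = []
    for _, facts in plan:
        sentences.append(
            f"In {facts['place']}, {facts['rate']} per cent of new mothers "
            "were not married when the birth was registered."
        )
    return ' '.join(sentences)

data = {'place': 'Wolverhampton', 'unmarried_pct': 56.5}
print(realise(plan_document(select_content(data))))
# In Wolverhampton, 56.5 per cent of new mothers were not married when
# the birth was registered.
```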
Not all aspects of a system need remain static, though. There are, after all,
numerous problems that cannot be solved with fixed input data, problems
that instead necessitate an ever-growing and/or unstable corpus. In these instances,
machine learning algorithms are more suitable. Search engines employ
learning algorithms to navigate the Web and provide more relevant results
in a digital landscape constantly altered by the daily emergence of countless
new websites. News platforms employ learning algorithms to summarise
current events by drawing from myriad articles and social media posts.
A machine learning system has by necessity achieved a sort of autopoiesis,
capable of maintaining itself through a network of processes that ensures the
system’s continuous production, maintenance, and improvement.
A machine learning system could be viewed as a small technological
ecosystem, with limited autonomy within its specified domain. As its
algorithms evolve in response to a sort of lived experience, the system
becomes increasingly independent of its creator.
Given the semantic, grammatical, and syntactic complexities of natural
language, inputting all of a language’s rules (and exceptions) into an
NLG system would be a daunting task for even the most skilled linguist.
What is more, by the time all the rules had been inputted, the linguistic
landscape would have changed. For these reasons, many new NLG
systems employ unsupervised machine learning, which applies inductive
logic associated with unlabelled data by using clustering and association
techniques to detect patterns that humans might overlook. Rather than
teach a system a language, a developer may instead teach the system how
to learn language, thereby allowing it to update its vocabulary and
grammar over time.
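A toy example (entirely my own; real systems operate over vastly larger corpora) hints at how unlabelled data can yield such patterns: words are associated by the neighbouring words they share, without any human labelling.

```python
# Group words by shared contexts: a crude, unsupervised association measure.
from collections import Counter, defaultdict

corpus = [
    'the cat sat on the mat',
    'the dog sat on the rug',
    'stocks rose as markets rallied',
    'markets fell as stocks slid',
]

# Represent each word by a bag of its immediate neighbours.
contexts = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for neighbour in words[max(0, i - 1):i + 2]:
            if neighbour != w:
                contexts[w][neighbour] += 1

def association(a, b):
    """Count overlapping neighbours: higher means more similar usage."""
    shared = set(contexts[a]) & set(contexts[b])
    return sum(contexts[a][w] + contexts[b][w] for w in shared)

print(association('cat', 'dog'), association('cat', 'stocks'))  # 4 0
```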
Terry Winograd’s SHRDLU (1971) operated upon a simulated world
comprising a robotic arm (like the claw from an arcade grabber machine),
a box, and several blocks and
pyramids of different sizes and colours. It executed commands, answered
questions, and prompted users to provide further information if needed.
SHRDLU was significant not only because it could interact with its users by
processing English-language input and responding appropriately, but also
because it had a memory of sorts. For example, the system could remember
that Winograd had defined a ‘steeple’ as a structure that contains two green
cubes and a pyramid (Winograd, 1971: 52–5). SHRDLU could also cor-
rectly answer questions about actions it had executed in the past, demon-
strating a primitive – however mechanical – lived experience.
While Winograd did not explicitly intend SHRDLU to be a work of
interactive fiction, Professor of Digital Media Nick Montfort deems
SHRDLU ‘the first work with all the formal elements of interactive fiction.’
According to Montfort (2005: 85), SHRDLU ‘allowed for more interesting
potential narratives, simulated spaces, and challenges to later be integrated
with the sort of structure it exemplified’.
These early systems are laden with human values, and are all human-controlled, albeit within
varying contexts of design and power. All were programmed to generate
output that resembled typical forms of human-written texts and/or mimicked
human writers’ cognitive processes during text production: namely, those
processes associated with problem solving and creativity. Some systems were
developed to facilitate reflection upon human creativity through the system-
atisation of cognitive processes associated with acts of creation; others were
developed to model human cognition, revealing that human thought pro-
cesses associated with acts of writing are too complex to mechanise entirely,
but seemingly straightforward enough to justify attempts to do so anyway.
The only one of these systems developed outside of an academic context –
Racter – emerged as an exploration of novelty, an ode to literary wackiness.
Whether for experimentation or entertainment, all of these systems have
served as means of exploring the theoretical and practical facets of creativity
as it is applied to authorship of cohesive narratives. They were born out of the
interconnectedness of humans and their technologies, and driven by curiosity
about computational competence as embodied by NLG, conscious or coded.
The Oulipo group, for example, composes texts according to predeter-
mined rules that reflect communicative and aesthetic intentions. As per the
Oulipo group’s extended name – Ouvroir de littérature potentielle (work-
shop of potential literature) – potentialities of language and meaning are
explored through the conscious application of constraint. An Oulipian work
is produced less from the intention to convey a particular message and more
from the intention to discover new worlds of syntactic and semantic
possibility by applying strict linguistic constraints. Jean Lescure’s (1973)
N+7 (S+7 in French) procedure, for example, replaces every noun in
a human-written text with the seventh one listed after it in a dictionary.
In this way, the human-written text is altered to create a new text using the
same syntactic formula but resulting in a different semantic meaning.
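The procedure is simple enough to sketch in a few lines of code (a toy version of my own: a fifteen-word list stands in for the dictionary, and every entry is treated as a noun).

```python
# A toy N+7: replace each noun with the seventh entry after it in an
# alphabetised dictionary, wrapping around at the end of the list.
DICTIONARY = sorted(['apple', 'banana', 'cake', 'coffee', 'desk',
                     'door', 'house', 'lamp', 'mat', 'moon', 'road',
                     'star', 'table', 'tree', 'window'])
NOUNS = set(DICTIONARY)  # pretend every dictionary entry is a noun

def n_plus_7(text, n=7):
    out = []
    for word in text.split():
        if word in NOUNS:
            i = DICTIONARY.index(word)
            out.append(DICTIONARY[(i + n) % len(DICTIONARY)])
        else:
            out.append(word)
    return ' '.join(out)

print(n_plus_7('the cake on the table'))
# -> 'the moon on the desk': same syntax, different semantics
```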
Readers are tasked with interpreting Oulipian texts that may seem non-
sensical at first glance, with translating their gibberish into significance. For
a more playful alternative, one need only look to the popular fill-in-the-
blanks word game Mad Libs. In Mad Libs, players complete a story template
that they themselves cannot read by suggesting words with specified
narrative functions (e.g. noun, verb). The story is then read aloud, often to
comical effect.
2.5 Conclusion
And here we find ourselves, following a history of mechanised text produc-
tion beginning in 1845, but still uncertain as to where this technology fits
within our modern age. Just as with the Eureka’s spectators, modern readers
are usually offered little more than an NLG system’s surface realisations.
A system’s constraints, determined by human developers, are indiscernible,
hidden within a black box of code that may never be shared, perhaps
due to intellectual property protection or patent limitations. Developers
may therefore capitalise on humans’ long-standing fascination with
mechanised text production, as well as on each individual user’s instinct
to fill in a text’s conceptual gaps. Development continues, but we still do
not know quite where computer-generated texts fit within our reading
cultures.

The texts generated by Philip M. Parker’s patented method
adhere to highly formulaic templates. These are often market reports and
reference materials, generated to meet the needs of underserved commu-
nities such as those with niche interests, or even those whose languages have
a limited number of native speakers. Parker’s patent application (‘Method
and Apparatus for Automated Authoring and Marketing’ (US 7266767 B2),
2006) touts his method of text production as a means for combatting the
financial and labour pressures of traditional publishing, such as the costs and
management of authors and editors. The patent application also presents
the method as preventing human error throughout production, and claims
greater economic profitability than traditional human authorship by using
extant print-on-demand systems, and by eliminating the need to employ
humans to write highly formulaic and repetitive texts.
Parker’s generated texts demonstrate the potential to fulfil niche
demands for which mass production would not be economically viable.
Parker’s generation system has been administered through ICON Group
International (n.d.), which sells such specialised global market research
reports as the 176-page 2007–2012 Outlook for Instant Chocolate Milk,
Weight Control Products, Whole Milk Powder, Malted Milk Powder, and
Other Dry Milk Products Shipped in Consumer Packages Weighing 3 Pounds or
Less Excluding Nonfat Dry Milk and Infants’ Formula in Japan, for £495
(Parker, 2006b). ICON’s World Outlook series is data-driven, blending
textual templates with automatically generated analytical tables visualising
industry trends. For text-heavier content, ICON has published a Webster’s
series of reference books including the 116-page Webster’s Swedish to
English Crossword Puzzles: Level 1 for £14.95 (Parker, 2007) and other
crossword books to facilitate language learning, in addition to books
comprising historical timelines and compilations of quotations, facts, and
phrases. Most of these books are not just printed on demand – they are
generated on demand.
In 2008, Parker’s method of text production was brought to public
attention through articles in news outlets like The Guardian (Abrahams,
2008) and The New York Times (Cohen, 2008). Articles about Parker’s
books have continued to be written since, each expressing the same sense of
confusion as those initial 2008 articles about the technology. One Amazon
reviewer (Downey, 2015) for Parker’s computer-generated The Official
Patient’s Sourcebook on Acne Rosacea (2002) warns readers against the
book’s automated production; another review instead criticises the book’s
insufficient content rather than its production process. The former review
emphasises production process over textual quality, exemplifying the sym-
bolic value attributed to originality expected from human authorship. The
inextricable relationship between content and form associates the printed
word with the human-authored text, and Parker’s decision to make his
generated texts available in printed form evidently affronts the extant
human-driven institutions that determine and distribute all that is fit for
print. Acne Rosacea, alongside Parker’s other generated books, challenges
understandings of the printed word as a fundamentally human artefact, an
expression of human intelligence, a product of an author with communica-
tive intention.
Medieval textual cultures maintained senses of authorship and
authority radically different from those of the modern age. The medieval
writer was a systematic plagiarist of sorts, copying from a variety of
source texts to compose his own (Goldschmidt, 1943: 88–93; 113).
Medieval scholars showed little regard for the individual identities of
their books’ writers, focusing instead on the ancient truths the books
held. The author was not an individual creative genius, but a messenger,
a copyist with creative licence. Certainly, there were exceptions (Saenger,
1999: 137). On a societal scale, though, medieval writers produced
content according to a markedly different set of readerly expectations
for content originality.
Printing technology transformed the authorial landscape, and the work
of E. Ph. Goldschmidt (1943), Elizabeth Eisenstein (1979), and Lucien
Febvre and Henri-Jean Martin (1997) (to list only a few) all support the
theory that print facilitated a shift away from anonymity and towards the
‘cults of personality’ (Eisenstein, 1979) that have come to characterise
modern-day print culture. Of course, authorship did not develop wholly
as a response to print; modern understandings of the author as an individual
creative genius began to emerge as early as the shift from orality to literacy
(Havelock, 1980: 98). Further, cultural developments like the rise of liberal-
ism and increasing privatisation also contributed to the movement away
from anonymity and towards cults of personality, as well as these cults’
resultant senses of textual authority. Eisenstein observes that ‘the veritable
explosion of “creative acts” during the seventeenth century – the so-called
“century of genius” – can be explained partly by the great increase in
possible permutations and combinations of ideas’ made possible by cross-
cultural interchanges and ‘increased [textual] output directed at relatively
stable markets’: both factors promoted by the proliferation of print
(Eisenstein, 1979: 75). In such views, print both influenced and was influ-
enced by changing conceptions of individuality and authorship. Through its
gradual crystallisation, print culture supported a shift in the cultural mindset
wherein cults of personality that praised individual genius became common-
place. The technological developments related to print had social conse-
quences: namely, the veneration of the author.
The growth of the printing trade contributed to more definitive demar-
cations of occupational roles. The publisher and printer, for example,
became two people rather than one, each with defined duties. One of the
most renowned models delineating printing occupations is Robert
Darnton’s Communications Circuit. Darnton (1982: 68) begins his Circuit
with the Author and the Publisher, connected with a bidirectional arrow.
The Circuit then moves consecutively to Printers (with a nod to Suppliers),
then Shippers, Booksellers, and Readers (and Binders). The Circuit closes
with a tentative unidirectional connection between Readers and Author.
Contained within the Circuit are the pressures of the ‘economic and social
conjuncture’, ‘intellectual influences and publicity’, and ‘political and legal
sanctions’. In this model, Darnton presents a rigid division of labour that
emphasises a standard materiality of printed texts produced for profit. Such
roles continue to broadly characterise today’s book trade, upholding the
symbolic value attributed to books by comprising numerous gatekeeper
functions for all that is fit for print. Although Darnton designed his
Communications Circuit for his own studies of eighteenth-century
France, book historians have since applied it liberally to a range of analogue
and digital contexts from the past and present day (e.g. Ray Murray and
Squires, 2013; van der Weel, 2001).
Print, in this view, trains the mind ‘for [the] analytical and scientific
thinking’ that defines modern rationality (van
der Weel, 2011a: 85). The ingrained notion of printed texts as expository and
linear, requiring mental concentration and patience, seeps into the human
psyche more generally, contributing to what Adriaan van der Weel has dubbed
the Order of the Book.
As van der Weel (2011a: 91) observes, our modern conception of
democracy, mainly representative democracy, rests on the assumptions
that all participating individuals (1) can access informative texts and (2)
can read those texts. The Order of the Book assigns high symbolic value to
books (especially in codex form), and to the printed word specifically,
because the texts contained within these books direct the operation of social
institutions (e.g. governmental policies) and contribute to the formation of
shared social consciousness. Although much social interaction currently
occurs in digital forms, these forms are commonly regarded as ephemeral,
and important digital texts are often printed out for future reference or
safekeeping. Printed texts also perpetuate cultural and literary heritage
through a ‘pastness of the past’ (Goody and Watt, 1963: 311).

Rising literacy rates heightened the demand for printed texts, subsequently
increasing both the number of
titles available and the size of print runs. There were, to be sure, other
reasons for rising literacy rates: public education initiatives, increased access
to circulating libraries, the establishment of coffee house culture, and
heightened dependency upon textual means for distribution of civic notices,
to list just a few (Cowan, 2012; Houston, 1993: 374–80). The demand for
texts was both initiated and fulfilled by the production capacity of con-
temporary printing technology; the increased presence of texts heightened
the demand for more. From the interplay of technological development and
socioeconomic circumstances, mass readerships gradually emerged to con-
sume the texts of individual authors who were granted cultural authority by
virtue of their occupation. The modern conception of the author as an
individual in an occupational role was formalised, leading to a new means
for personal financial gain: literary celebrity.
The proliferation of digital media has nuanced the conventional under-
standing of the author as literary celebrity and cultural authority. Matthew
Kirschenbaum, for one, considers the traditional author role not as dis-
appearing, but as being supplemented by an ‘@uthor’ role.

Jean-Pierre Balpe, meanwhile, argues that ‘the author would be a fiction of
the texts. This will be our final
point, but it will remain unresolved.’ In Balpe’s view, the computer-
generated text ‘destabilises’ the reader because its words ‘are emitted by
an inaccessible, superiorly reassuring mind’ rather than by an embodied
author with whom one could in principle debate a text’s meaning (Balpe,
1995: 29). The computer-generated text stimulates readers to consider their
acts of reading and meaning-making, to acknowledge and adapt their own
senses of reader responsibility. The reader’s relationship with the author –
the hermeneutic contract – is at the forefront of such consideration.
Digital media have likewise produced texts that no single author can be
said to complete (see Skains, 2019 for more examples). Indeed, occu-
pational roles associated with print become blurred and altogether indis-
tinguishable in the consideration of a system that cannot easily develop
a distinct institutional order. Philip Parker’s patented algorithms are
combined to generate, market, and distribute books, but one cannot
readily ascertain precisely which part of Parker’s system does what, or
how involved Parker himself has been in the production of any one book.
In an increasingly digital world, the author continues to assert cultural
authority, but the kind of cultural authority asserted has changed.
Through digital media that permit online exchange, the author appears
more embodied than ever before, a personal figure who can respond
directly to readers through social media platforms and who may exist
outside of her own texts or even the literary realm altogether (e.g. as
a political figure or an actor). At the same time, the author – masked by
the character she performs in an appeal to literary celebrity, or perhaps
replaced by a patented set of algorithms – remains a largely imagined
figure, further disembodied from the texts she has produced.
3.6 Hyper-Individualism
This elevated role of the reader affirms a greater societal shift towards what
I identify here as ‘hyper-individualism’. Hyper(-)individualism as a term
has already been used intermittently in discussions of psychology, media,
and culture (Gauchet, 2000; Lake, 2017). The term as it is used here, though,
refers to the tailoring of textual production and reception to the individual
reader.

Such hyper-individualism implicates not only the symbolic
value attributed to human authorship, but also the companies that specialise
in wide-ranging applications of algorithmic authorship. In one study of
numerous international organisations that use news content generators,
researchers (Montal and Reich, 2017) observed what they called
a ‘transparency gap’: organisations avoiding full disclosure regarding the
origins of their computer-generated stories. Tiptoeing around issues of
authorship, organisations may share clues about NLG having factored
into article production, but omit any mention of algorithmic authorship
that may spur negative reactions similar to those expressed by the reviewers
of Parker’s Acne Rosacea. Despite the success of other NLG systems’
personalised outputs, companies reaping rewards from the mass production
of computer-generated articles appear attuned to the potential for reader
resistance, and therefore withhold production details.
3.7 Conclusion
Algorithmic authorship affronts the conventional author–reader relation-
ship – the hermeneutic contract – through hyper-individualistic personali-
sation of reading experiences.

Amersfoort, de Bibliotheek Eemland: the launch of a new edition of
Ik, robot. On the stage stands Ronald Giphart, positioned next to Arie
Rommers, user of one of the world’s most advanced prosthetic hands.
Rommers stands between Giphart and a large robot arm, which has been
placed on a platform adorned with a ceremonious red tablecloth.
Kraftwerk’s 1978 ‘The Robots’ blasts through the sound system.
Slowly, the robot arm lifts a copy of the new edition of Ik, robot, turning
to pass it to Rommers. From robot to cyborg, and then from cyborg to
human, Rommers passes the book to Giphart, who triumphantly raises it
towards the audience. The room erupts in applause.
At the event’s reception, attendees excitedly collect their free copies of
Ik, robot, flipping swiftly to the final chapter by Giphart and the Asibot.
Representatives for the Nao robots and Pepper stand with their machines,
answering questions about the technologies they are touting. The Nao
robots are used in community centres like libraries and seniors’ homes to
dispel fears of new technology; Pepper is used as a novelty in service-
based businesses and corporate events. All representatives assure atten-
dees that these robots respond to users through series of programmed
responses. ‘It’s just a puppet,’ one representative asserts. ‘He just does
what you tell him to do.’2
2 Anonymised Pepper representative, conversation with the author
(1 November 2017, Amersfoort).
A system’s output reflects choices made by its developers, as well as
choices the user has made while engaging with the system.
One recent work that exemplifies the NLG system as tool is Zach
Whalen’s The Several Houses of Brian, Spencer, Liam, Victoria, Brayden,
Vincent, and Alex, a product of the 2017 National Novel Generation Month
(NaNoGenMo). Initiated in 2013 by Darius Kazemi (see Kazemi, n.d.),
NaNoGenMo is a spin-off of the popular National Novel Writing Month
(NaNoWriMo), when participants are challenged to write a novel of 50,000
words or more each November. In NaNoGenMo, though, participants do
not write their novels themselves. Instead, they must write code capable of
generating 50,000-word novels. While NaNoWriMo defines a novel as an
extended piece of fiction, Kazemi explains (see nanogenmo.github.io) that
in NaNoGenMo ‘[t]he “novel” is defined however you want. It could be
50,000 repetitions of the word “meow”. It could literally grab a random
novel from Project Gutenberg. It doesn’t matter, as long as it’s 50k+
words.’ Since its inception, NaNoGenMo participants have submitted
hundreds of computer-generated novels reflecting a wide range of subjects
and generation methods. In 2014, one participant (hugovk, 2014) generated
50,000 Meows, which changed all of the words in classic works such as
Melville’s Moby Dick into meows of the same length; ‘Better sleep with
a sober cannibal than a drunken Christian’ translated to ‘Meeoow meoow
meow m meeow meooooow meow m meeeeow Meoooooow.’
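A simplified re-implementation of the idea (my own sketch; hugovk’s actual script pads its meows somewhat differently) shows how little code such a ‘novel’ requires.

```python
# Replace every word with a 'meow' of the same length, preserving
# initial capitalisation; 50,000 Meows applied this idea to whole books.
import re

def meowify(word):
    n = len(word)
    out = 'meow'[:n] if n <= 4 else 'me' + 'o' * (n - 3) + 'w'
    return out.capitalize() if word[0].isupper() else out

line = 'Better sleep with a sober cannibal than a drunken Christian'
print(' '.join(meowify(w) for w in re.findall(r'[A-Za-z]+', line)))
# -> 'Meooow meoow meow m meoow meooooow meow m meoooow Meoooooow'
```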
In 2015, another participant (Regan) wrote code that identified the nearest named
colour for every pixel of a digital photograph of a cover image for
Hemingway’s The Sun Also Rises; the novel, totalling more than 803,000
words, comprises a list of these colours beginning with ‘Quartz. Davy’s
grey. Purple taupe. Gray.’
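The underlying computation is straightforward, as the sketch below suggests (the five-colour palette and its RGB values are my own approximations; Regan’s palette of named colours was far larger).

```python
# Map each pixel to the nearest colour in a named palette by squared
# Euclidean distance in RGB space, then list the colour names in order.
PALETTE = {
    'Quartz': (81, 72, 79),
    "Davy's grey": (85, 85, 85),
    'Purple taupe': (80, 64, 77),
    'Gray': (128, 128, 128),
    'White': (255, 255, 255),
}

def nearest_named(rgb):
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(PALETTE, key=lambda name: dist2(PALETTE[name], rgb))

pixels = [(82, 70, 80), (84, 86, 85), (79, 66, 75), (130, 126, 129)]
print('. '.join(nearest_named(p) for p in pixels) + '.')
# -> Quartz. Davy's grey. Purple taupe. Gray.
```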
These kinds of works exemplify the unique potentialities of NLG as
a means for text production. Whalen’s Several
Houses takes a more traditional form, mimicking a typical children’s book,
including large, colourful text and striking imagery. Each ‘house’ is a story
that adopts the form of the classic nursery rhyme ‘This Is the House That
Jack Built’. Page spreads are split into two parts: illustrations on the left, text
on the right. ‘This is the SLEEP that eluded the WORRIED PERSON that
lay in the BED that rested the PERSON that armed the WEAPON that
hurt the PERSON that headed for the HOUSE that Spencer built,’ reads
page 128, which is accompanied by an image of a house blurred by pastel
watercolours and overlaid with images like a person, a gun, and a bed
(Whalen, 2018b).
To generate Several Houses, Whalen wrote a program that builds chains
of textual concepts ending in ‘house’. These chains are developed by
referencing ConceptNet, a free online semantic network that links meanings
of words to teach computers the nuances of language and intricacies of
human knowledge using machine learning methods. The accompanying
illustrations likewise stemmed from Whalen’s code. Another program
selects a relevant icon from the Noun Project’s free online repository of
icons, colours that icon, and then randomly places it on the page on top of
another Creative Commons image of a house, which itself has been
randomly selected from Flickr and coloured using a ‘watercolor’ function
on free image-editing software ImageMagick (Whalen, 2018a). As per
NaNoGenMo’s rules, Whalen has publicly shared the source code for
both the text and the illustrations; more tech-savvy readers may generate
novels in this style for themselves (Whalen, 2018b and 2019).
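The chain-building logic can be gestured at with a toy sketch (the hand-made relation table below is hypothetical, standing in for the ConceptNet queries and relation-specific phrasing that Whalen’s program actually uses).

```python
# Build a chain of linked concepts ending in 'house', then render it in
# the cumulative style of 'This Is the House That Jack Built'.
import random

RELATED = {  # hypothetical 'X links to Y' pairs, standing in for ConceptNet
    'house': ['bed', 'weapon', 'person'],
    'bed': ['person', 'sleep'],
    'person': ['worry', 'sleep'],
    'weapon': ['person'],
}

def build_chain(start='house', length=4):
    chain = [start]
    while len(chain) < length and RELATED.get(chain[-1]):
        chain.append(random.choice(RELATED[chain[-1]]))
    return chain

parts = [f'the {concept.upper()}' for concept in reversed(build_chain())]
print('This is ' + ' that came from '.join(parts) + ' that Jack built.')
# e.g. 'This is the SLEEP that came from the PERSON that came from the
#       BED that came from the HOUSE that Jack built.'
```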
Whalen’s Several Houses demonstrates the human–computer interplay
that characterises most NaNoGenMo output. It is Whalen’s – not the
system’s – creative vision that is manifest here. Although the exact output of
the system cannot be predicted prior to generation, contributing to a sense
of computational originality, the system is a tool for realising Whalen’s
predetermined aesthetic: a humorous (and surprisingly adult-themed) take
on a classic children’s story. Really, the system produces a textual artefact
almost indistinguishable from a human-written book, capitalising upon the
conventional codex form (albeit digitised) and engrained linear reading
practices. Through Several Houses’ familiar format, Whalen subtly subverts
the textual and visual genre conventions of children’s literature, both by
satirising a classic nursery rhyme and by quantifying the artistic process so
that it is almost entirely randomised. The reader, recognising Whalen’s play
on conventional children’s literature, may appreciate the hilarity of the text
itself, but may also appreciate the perceived intention that drove the work’s
creation more generally, as well as the skill involved in coding the system.
For her 2015 NaNoGenMo submission entitled Our Arrival: A Novel,
Allison Parrish generated a diary of an expedition through fantastical
fictional places. According to the novel’s preface (written by Parrish),
Parrish’s source corpus comprised more than 5,700 sentences from public
domain Project Gutenberg books whose subject entries included strings
related to the natural sciences, exploration, and science fiction, and whose
constructions and content satisfied Parrish’s programmed criteria (Parrish,
2015: iii). By parsing and combining randomly selected sentences according
to their grammatical constituents, Parrish’s system produced
a phantasmagorical text that invites readers on a magic carpet ride through
the outskirts of the imagination. One entry (Parrish, 2015: 223) describes
a roof that cannot see, ‘nor can it listen’. And it is remarkably difficult to
imagine a roof with the simultaneous qualities of mountains, canyons, and
crags. Like Whalen’s
Several Houses, Our Arrival subverts the textual conventions of mass-
marketed fiction. Syntactically and semantically, it is a textual absurdity
that draws one’s attention to the potentialities of the linguistic imagination,
pushing the boundaries of the usual by repurposing extant texts in the
Project Gutenberg corpus. Moreover, both works draw attention to the
medium of the book itself. Despite using digital technology not bound to
the conventional codex format, both Whalen and Parrish have chosen to
present their computer-generated texts in familiar codex forms. Such texts
could be seen as bridging a gap between the analogue and the digital, the old
and the new, the mundane and the uncharted. Existing within systemic
structures of literary convention, readers are guided towards the interpre-
tive processes to which they have grown accustomed through cues of
familiar page layouts and intelligible texts.
Despite their similar appearances, though, human-written and compu-
ter-generated texts reflect different processes of production. Consider, for
instance, the work of one group of researchers (Manjavacas et al., 2017).
In 2017, this group launched the Asibot. Using
a graphical user interface, a human author drafts a Dutch-language text,
sentence by sentence, with the Asibot proposing sentences to continue the
story. To learn language and sentence structure, the system was trained
using the texts of 10,000 Dutch e-books. The method of machine learning
employed was largely unsupervised, meaning that the system was pro-
grammed with basic awareness of global narrative structure but otherwise
autonomously distinguished syntactic and semantic patterns within the
source corpus. The system also learned how to mimic the unique writing
styles of such renowned writers as Isaac Asimov and Dutch novelist Ronald
Giphart, and can now generate sentences using similar words and syntax as
these writers. Through machine learning the Asibot cultivated its own lived
experience of sorts: an experience that, while directed by human instruction,
stemmed from particular methods of reading and analysing a body of texts
too extensive for any one human to read in a lifetime. This experience
enables the Asibot to generate text that adheres to syntactic, semantic, and
stylistic convention, while at the same time being sufficiently original.
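Even a crude statistical model of a corpus can propose continuations in this spirit. The sketch below (my own word-level Markov chain over a toy corpus; the Asibot itself used neural networks trained on those 10,000 e-books) illustrates the principle of learning continuations from prior text.

```python
# Learn word-to-word transitions from a corpus, then propose a
# continuation by walking the learned transitions.
import random
from collections import defaultdict

corpus = ('the robot wrote a story . the author wrote a novel . '
          'the robot read a story .')
words = corpus.split()

model = defaultdict(list)
for a, b in zip(words, words[1:]):
    model[a].append(b)

def propose(start, max_words=10):
    out = [start]
    while out[-1] != '.' and len(out) < max_words:
        out.append(random.choice(model[out[-1]]))
    return ' '.join(out)

print(propose('the'))  # e.g. 'the author wrote a novel .'
```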
The system’s proposed sentences may diverge markedly from those of the
human writer, revealing new avenues for literary traipsing.
Alternatively, a human user could choose to accept all of the system’s
proposals, allowing the system to produce an entire text following from
a single inputted prompt. This latter possibility has been the subject of much
public deliberation, especially in response to OpenAI’s development of their
GPT text generators, which effectively operate in such a way (Radford
et al., 2019).
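Readers who wish to experiment can run the openly released GPT-2 weights locally through Hugging Face’s transformers library (a minimal illustration of prompt-driven generation; this is not how OpenAI’s own systems are accessed, the prompt is mine, and outputs will differ between runs).

```python
# Generate a continuation from a single prompt using GPT-2.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline('text-generation', model='gpt2')
result = generator('Every morning, Simon walks down the street',
                   max_length=40, num_return_sequences=1)
print(result[0]['generated_text'])
```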
In The Printing Press as an Agent of Change, Elizabeth Eisenstein (1979)
traces print’s transformation of social structures related to religion,
science, and social order. Eisenstein’s use of the word
‘agent’ instead of ‘tool’ nods to the power of the social structures associated
with the printing press over any one individual’s use of the technology.
Even Gutenberg himself is hardly mentioned. Indeed, the printing press has
been not just a tool, but an institution that has transformed social circum-
stances through new labour economies and processes of text production,
dissemination, and reception. Focusing on a technology’s large-scale impact
rather than singular applications draws attention to that technology’s more
general social and hermeneutic implications. A printed text does more than
just transmit words on a page; it represents an established system of labour
and perpetuates the symbolic value of print that constitutes our current
Order of the Book. The Order of the Book assigns special significance to
books (especially in the printed codex form) because the texts held within
those books direct the operation of social institutions (van der Weel, 2011a:
91). It is therefore essential not just to study printed texts as isolated literary
artefacts, but to recognise the sociocultural development of these texts, as
well as the social and technological developments that have influenced their
forms.
Whether computers may themselves be deemed agents remains contested,
but that computational capacity, output, and influence have led to social
change is clear. Another rebuttal to Eisenstein’s work that explicitly decries
her use of the word ‘agent’ declares that ‘[a]gents are humans with will,
intention and responsibility, while agencies are impersonal: corporations,
machines, the weather, gravity. . . . The alternative is to read agent figura-
tively, as a metaphor, more precisely as a personification wherein a thing or
abstraction is endowed with life’ (Teigen, 1987: 8). In either sense, to deem
an NLG system an agent, rather than just a tool, is to acknowledge the
wider-ranging social structures associated with algorithmic authorship
rather than just NLG’s impact on individuals. It is to acknowledge the
transformative social power of computer-generated texts. Thus, ‘agent’ in
this Element refers neither to a human with will nor a thing endowed with
life. It refers instead to participants in always-mediated processes of com-
munication. ‘Agent’ refers to someone – or something – that substantially
contributes to the creation of output that could be construed as meaningful.
‘Agent’ refers to someone or something with influence.
Some NLG systems may be regarded as fitting comfortably within the
lineage of writing tools, as they quite obviously depend upon the embodied
involvement of a person. ‘You generally know in advance the space in which the output
will fall, but you don’t know details of where it will fall.’ But it is precisely
the unpredictability resulting in original content that prompts a shift in
perceiving some NLG systems from being mere tools for manifesting
human vision to being agents in themselves. Staunch assertions that NLG
systems are no more than tools lack recognition of the potential futures of
development, and hinder discussions of ethical and social circumstances that
may inform judgements of responsibility for system output and claims to
financial gain from that output (Prescott, 2017). Agency is what is per-
mitted. The moment a reader begins searching for the intention driving
a text’s generation is the moment that system is granted agency as a result of
a reader’s attempt to fulfil the conventional hermeneutic contract.
Recognising an NLG system as an agent – and subsequently as an author –
welcomes the NLG system into the realm of what Michel Foucault has
deemed the ‘author function’.
For Foucault, an author’s name is not just a description of textual
ownership and accountability, but also a designation of which texts are to
hold cultural weight. An author’s name allows one to classify and define
texts in relation to other texts; those texts comprising an author’s corpus are
seen as complementary, and may be juxtaposed with other corpora.
Foucault’s author function thus recognises the author as a cultural construct
that exists within fluid networks of social circumstances, rather than as any
one individual. In Foucault’s (1998: 221–2) words, ‘the author is not an
indefinite source of significations that fill a work; the author does not
precede the works; . . . [t]he author is therefore the ideological figure by
which one marks the manner in which we fear the proliferation of
meaning.’3 In our world of information abundance, this fear of ‘the pro-
liferation of meaning’ is increasingly present; the responsibility to make
knowledge from information rests increasingly with the reader, who must
swim through a sea of non-linear printed works and digital hypertexts that
are ever more disembodied from singular human sources. To fear the
proliferation of meaning is to fear the responsibility of having to interpret
and negotiate the significance of myriad textual artefacts, in either isolation
or relation to one another. Linking a text with an author – however
imaginary that author may be – streamlines processes of meaning-making.

3 Ironically, as some scholars (Wilson, 2004) have observed, Foucault
himself wrestles with the notion of the author as ideological, perhaps due
to his own desire for intellectual recognition and celebrity.

John Searle, for one, would deny that NLG systems can
be anything more than tools because they are incapable of the intentionality
demonstrated by humans. Given that natural languages are humans’ pri-
mary means for interpersonal communication, it is reasonable to assume
that those who communicate through natural language are doing so to fulfil
some communicative goal. Searle’s ‘human program’, producing Chinese
texts without understanding the language, may not have so pointed
a communicative goal, and may therefore be considered to lack the assumed
free-will intentionality of conventional authorial agents.
Expectations of human-specific free-will intentionality have also
informed policy discussions related to authorial credit. In the United
States’ 1979 Final Report of the National Commission on New Technology
Uses of Copyrighted Works (CONTU Final Report), Commissioner John
Hersey declares that ‘a definite danger to the quality of life must come with
a blurring and merging of human and mechanical communication’
(National Commission on New Technological Uses of Copyrighted
Works, 1979: 36–7). He argues that one must not equate human beings
and machines.
Daniel Dennett’s (1987) notion of the ‘intentional stance’ offers another
perspective. For Dennett, users’ perceptions of systems are more significant than actual
system functionality. Following from actor-network theory (ANT), if users ascribe beliefs and
desires to a system, thereby welcoming the system into a social network,
that system becomes a social agent. In one example of the intentional stance,
Dennett considers a thermostat. One may regard a thermostat as a mere
control mechanism for regulating temperature, whose regulatory function-
ality may be applied to myriad situations (e.g. a car’s cruise control). The
thermostat’s technology is not constrained to any one boiler, or even to the
domain of temperature. However, the more the thermostat is integrated into
the world – the more complex its internal operations become as a result of
enriched connections with other entities associated with its assigned task
(e.g. the thermostat purchases its own boiler fuel or checks a house’s
weather stripping) – the more one can say that the thermostat ‘has beliefs
about heat and about this very room, and so forth, not only because of the
system’s actual location in, and operations on, the world, but because we
cannot imagine another niche in which it could be placed where it would
work’ (Dennett, 1987: 31). The thermostat becomes a distinctive agent that
fulfils a certain social function, prompting the emergence of new organisa-
tions of behaviour and thought that centre on the thermostat. David
Herman’s (2008: 257) more recent consideration of the intentional stance
applies it to narrative theory, concluding that ‘stories simultaneously reflect
and activate a disposition to adopt the intentional stance.’ Connecting
Dennett’s notion of the intentional stance with more recent developments
in AI and algorithmic authorship, there are efforts in the field of explainable
AI/explainable computational intelligence to enable machine learning sys-
tems to describe their decision-making processes, often in natural lan-
guages, and these descriptions may support a sense of autonomous
intentionality. As one account of narrative sense-making has it, we
‘construct models of how others may be feeling and acting, models that
coevolve with our ongoing interior monologues describing and interpreting
to ourselves our own feelings and behaviors.’ Writing about computer-
generated texts in particular, Manuel Portela (2018: 195) notes that ‘[e]ven if
there is a certain degree of mathematical randomness in the verbal output,
linguistic combinations will have emergent meanings that will be read
literally. Random textual instantiations thus open up machinic constraints
to the unconscious of the reader.’ All this is to say that the narrative text – in
this case a computer-generated text – prompts the production of another
narrative about that text within the readerly mind. The reader seeks
authorial intention through engagement with engrained hermeneutic pro-
cesses, fulfilling Dennett’s intentional stance. If a reader who did under-
stand Chinese were to read the text produced by Searle’s human program,
for example, that reader would still consider the text in light of the assumed
intentionality of an imagined authorial agent. We legitimise our lived
experiences to ourselves and others by using natural language, and the
use of natural language alone is enough to provoke instinctual ascribing of
beliefs and desires to an imagined author so as to discern the assumed
intention behind a text. The NLG system’s social agency emerges through
the co-creative process and is affirmed the moment a reader begins inter-
preting its output. The writing system, human writer, and readers exist
within a fluid network of social relationships rooted in reciprocal exchanges
related to power, whether explicitly or implicitly acknowledged. This is not
to neglect the important questions of moral, financial, and legal responsi-
bility for textual content. Recognising an NLG system as a social agent does
not automatically make that system solely responsible for its output; ques-
tions of responsibility are still to be sufficiently pondered. Recognising an
NLG system as a social agent, though, does allow for the situation of NLG
within a lineage of textual technologies and literary studies. Readers con-
sider computer-generated output in light of their expectations for human-
written texts. ‘When faced with a totally new situation,’ Marshall McLuhan
(with Fiore and Agel, 1967: 74) writes, ‘we tend always to attach ourselves
to the objects, to the flavor of the most recent past. We look at the present
through a rear-view mirror. We march backwards into the future.’
Computer-generated texts bring conventional understandings of the
author–reader relationship – what I have called the hermeneutic contract –
into question. The author of a computer-generated text is ambiguous. As
Peter Shillingsburg rightly observes, the precise ways readers may negotiate this
communicative function are not so easily anticipated. One oft-referenced
stance is the ‘death of the author’: Roland Barthes’ 1967 argument that
attributing an author to a text constrains its readers’ interpretations of that
text given an assumption that there is one definitive meaning established by
the writer. Barthes (1977: 148) explains that ‘[c]lassic criticism has never
paid any attention to the reader; for it, the writer is the only person in
literature. . . . [W]e know that to give writing its future, it is necessary to
overthrow the myth: the birth of the reader must be at the cost of the death
of the Author.’ But although the printed word is an artefact largely
disembodied from its initial writer through the mechanised and communal
processes of its production, the sense of authorial intention is an integral
part of its legitimacy. As a reader reads, she interprets a text in light of an
assumed communicational intention informed by a commitment to mutual
understanding that itself is informed by shared sociocultural traditions and
appropriate language usage. Whether the reader chooses to accept what she
believes was the author’s intention during writing, or whether she chooses
to resist it, that assumption frames her reading.

Reviewing Racter’s output, A. K. Dewdney reflects:

the allowances I have been making for Racter all along
are stretched to the breaking point when Racter mentions
that besides their love they also have typewriters. Invited to
share in this extraordinary insight, I tremble on the brink of
a completely unknown mental world, one that I would
prefer not to enter.
Racter’s syntactic bumbling, jumping from one topic to another, often with
no logical connections between topics, leaves Dewdney with the sense that
he is reading the work of an author unhinged. Nevertheless, Dewdney
acknowledges the ‘extraordinary insight’ that this output might offer him,
despite his reluctance to enter into the computational writer’s mind.
In another review of computer-generated text, Charles Hartman (1996: 71)
recounts his experiences developing a poetry generation system, explaining
why the output of his AutoPoet could not be deemed a ‘victory’. For context,
the AutoPoet generated such output as:
‘All our habits of reading are called upon, all the old expectations, and
then let down,’ Hartman declares. However, this Element, supported by
a series of empirical studies investigating reader responses to computer-
generated texts (Henrickson, 2019a/b), shows that these expectations are
not let down. Readers continue to engage in processes of meaning-
making when faced with computer-generated texts, however dull or
nonsensical those texts may be. Readers continue to imagine the author,
even for a computer-generated text, because they instinctually aim to
fulfil the hermeneutic contract to which they are accustomed. Although
Hartman believes his AutoPoet failed because it did not sufficiently draw
from its own lived experiences and emotional perceptions to produce
truly ‘unique’ work, the only expectations that are actually let down are
those readers carry over from human-written texts.

Reviewing Racter’s The Policeman’s Beard nearly two decades
after its publication, experimental writer Christian Bök (2002: 10) more
bluntly writes that ‘RACTER is a mindless identity, whose very acephalia
[absence of a head] demonstrates the fundamental irrelevance of the writing
subject in the manufacture of the written product. The involvement of an
author in the production of literature has henceforth become discretionary.’
Similarly, novelist and critic Steve Tomasula (2018: 50) has argued that
computer-generated texts exemplify a kind of ‘postliterary literature’ that ‘is
informed by an accompanying posthuman ethos – one that is at odds with
an ethos based upon the uniqueness of the individual, and its cousins,
especially originality.’ But an algorithmic author does not ask us to imagine
a world in which there are no more humans undertaking textual labour. It
simply prompts us to imagine a world in which algorithmic authorship has
a place alongside human authorship. It encourages us to adapt to the new
circumstances of an increasingly digital age. And, most importantly, NLG
systems may produce different kinds of texts altogether: ones that challenge
readers’ syntactic and semantic expectations and demand their own inter-
pretive approaches. As a result, value judgements about system output are
best made on that output’s own terms rather than through comparison with
human-written texts.
4.5 Conclusion
Disembodied from the text, save for perhaps an accompanying name, the
author has long been imagined by the reader through participation in the
hermeneutic contract. For computer-generated texts, though, the author is
not so clear a figure, not so easily personified through consideration of
linguistic tone or perceived interpersonal connection. Yet the reader’s
assumption of the conventional hermeneutic contract continues to inform
the reading experience, and the NLG system is evaluated through compar-
ison to the human writer. When a system produces output that exists
alongside human-written text, readers may interpret this output as reflecting
the same commitment to adhere to and perpetuate shared sociocultural
traditions as that presumed of a human writer: in such instances, these are
the traditions of appropriate semantic and syntactic structures. In this way,
the NLG system itself takes on a fundamentally social role distinct from that
of its developers. It stands on the fence between tool and agent, teetering
ever closer to the realm of agency that characterises modern conceptions of
authorship. Recognising NLG systems as agents in themselves, as social
institutions with transformative power rather than mere tools for actualising
human vision, serves to counteract comparisons of NLG systems to human
writers, permitting clearer recognition of the unique contributions of
algorithmic authorship to the social and literary landscapes. The type of
agency exerted by these systems need not be characterised by the free-will
intention of human writers, but may instead be characterised by the
programmed intention to fulfil a designated objective that is respected
within wider social networks: a deterministic approach to being, so to
speak. Algorithmic authorship necessitates new infrastructures of text
production and reception.

NLG systems augment the speed and power of human capacities for text
production,
extending human faculties. While they may take the place of some human
writers, they remain dependent upon humans for their functionality, opera-
tion, and maintenance. They contribute to new sociologies of text, offering
alternative textual performances wherein the reader’s experience is priori-
tised over that of the author. Acknowledging NLG systems as agents rather
than mere tools prompts reflection upon the distinct labour economies
driving and emerging from algorithmic authorship. Such acknowledgement
facilitates recognition of the transformative social power of system
output, contributing to more nuanced discussions of how algorithmic
authorship conforms to and/or confronts conventional author–reader
relationships.
And so we return to Amersfoort, in de Bibliotheek Eemland. A robot
arm lifts a copy of the new edition of Ik, robot, passing it to a man with an
advanced prosthetic hand: a cyborg. The cyborg then passes the book to the
human, who raises it for the audience to see. The crowd cheers, excited to
read the synthetic literature that is the product of human–computer colla-
boration. NLG systems such as those used to write a new story for Ik,
robot exemplify this human–computer collaboration.
5 Conclusion
Let us now revisit Simon, sitting in his local café with his newspaper set
down on the table in front of him. ‘Majority of New Mothers in
Wolverhampton Are Unmarried’ (Anon., 2017) reads the title of one
story. Rather than being written by a human, though, this story has been
computer-generated by a company called Urbs Media, which boasts its
technological power for ‘crafting stories and harnessing automation to mass
localise’ (Urbs Media, n.d.). Now I feel like I’m on the set of a sci-fi film, Simon
thinks to himself.
Despite sounding science-fictional, computer-generated texts abound.
Data-driven news articles like Simon’s ‘Majority of New Mothers’ are
common applications of NLG. However, as has been shown throughout
this Element, news is only one of NLG’s many domains. Machines are
generating texts for expository and aesthetic purposes, and have been doing
so for more than a century.
Computer-generated texts in their current forms, however, bring modern conventions into question. In our current state – the Order of the Book – readers can be presented with narratives produced just for them, affirming their senses of self within
an age of information overload that may otherwise seem stifling to any one
voice. Although the Order of the Book is challenged by the sheer prolifera-
tion of texts that permit hyper-individualised reading, NLG developers
capitalise upon ingrained assumptions of textual and codicological value.
Computer-generated texts represent today’s digital ecology, transform-
ing what has largely been seen as an extension of the human self – text, the
claim to authorship – into an esoteric entanglement of human and computer
involvements. Ultimately, NLG is about computational choice and creativity. According to its programmed instructions, the system must choose what content to include in its output and how to convey that content; a minimal sketch below illustrates these two kinds of choice. By
applying a sociological perspective to an analysis of computer-generated
texts, though, we can see these texts more clearly for what they are: human
artefacts, produced by humans for human reception. These artefacts have
notable implications for the social and literary spheres, with one of the most
significant but implicit being that they bring into question the conventional
imagined relationship – the hermeneutic contract – between reader and
author. I have therefore argued for a semantic shift from the NLG system as
tool to the system as agent, based on a broader understanding of agency as
defined by both free-will and programmed forms of intentionality. The
NLG system holds social power because readers attempt to derive meaning
from its output. The NLG system is a communicative medium with the
ability to transform social circumstances with its output, and even its very
presence. It has social agency; it is a social agent; it truly becomes an
algorithmic author.
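To make this notion of programmed intention concrete, consider a deliberately minimal data-to-text sketch of the two choices noted above: what content to include, and how to convey it. Everything in it – the names, the threshold, the figures – is an illustrative assumption of mine, not a description of any production system such as Urbs Media’s.

# A toy data-to-text pipeline in Python. Content determination decides
# what to say; surface realisation decides how to say it. All names,
# thresholds, and figures are hypothetical.

from dataclasses import dataclass

@dataclass
class BirthStats:
    area: str
    year: int
    births: int
    unmarried_share: float       # this year's share of births to unmarried parents
    prev_unmarried_share: float  # last year's share, kept for trend detection

def determine_content(stats: BirthStats) -> list[str]:
    """Choose which facts are newsworthy enough to include."""
    facts = [f"{stats.unmarried_share:.1%} of the {stats.births:,} babies born "
             f"across {stats.area} in {stats.year} had unmarried parents"]
    change = stats.unmarried_share - stats.prev_unmarried_share
    if abs(change) >= 0.001:  # mention the trend only if the change is non-trivial
        facts.append(("that is an increase on" if change > 0
                      else "that is a fall from") + " the previous year")
    return facts

def realise(facts: list[str]) -> str:
    """Convey the selected content as connected prose."""
    return "Figures show that " + ", and ".join(facts) + "."

print(realise(determine_content(BirthStats("Anytown", 2020, 1000, 0.51, 0.50))))
# Figures show that 51.0% of the 1,000 babies born across Anytown in 2020
# had unmarried parents, and that is an increase on the previous year.

Small as the sketch is, it shows why the language of agency is tempting: every sentence the system prints is the trace of decisions its developer has delegated to it.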
In conversation at the Electronic Literature Organization’s 2018 con-
ference, one artist whose practice centres on producing computer-generated
text solicited advice.4 ‘When my work does get press,’ she explained, ‘the
output is often attributed to my bots, rather than to me. But I am the one
who creates. How do I make sure I am the one being given credit?’ This
leads to what are possibly the most important questions arising from this
discussion, which have gone unaddressed until this point. What are we to
do with this theorising? How do we want readers to respond to these texts?
4 Anonymised poet, conversation with the author (15 August 2018, Montreal).
Scholarship on computer-generated literature has largely focused on the literary merits of specific texts, with occasional investigations into the phenomenological experiences of readers engaging with them. Future studies may also benefit from drawing
from the field of affective NLG, which is defined by its efforts to evoke
emotional responses in readers through conscious employment of words and
sentence structures. However, the well of empirical studies related to affective
NLG and the aesthetic experiences prompted by computer-generated output
is remarkably shallow, and has been largely untouched by researchers outside
of computer science. Working alongside the electronic literature community and affective NLG researchers, humanities researchers are especially well suited to catalyse this area of study, given their attention to the subtle ways language may be used for persuasion, entertainment, and aesthetic purposes. Integrating these disciplines’ quantitative and qualitative tendencies may yield a deeper understanding of NLG’s potential for overcoming common perceptions of mechanised impassivity.
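To suggest what such ‘conscious employment of words and sentence structures’ might look like computationally, the following toy sketch realises a single fact three ways; the affect labels and sentence frames are hypothetical assumptions, not extracts from any published affective NLG system.

# Affective surface realisation in miniature: one fact, three phrasings,
# each chosen to evoke a different response in the reader.

AFFECT_FRAMES = {
    "neutral":  "{team} scored {n} goals this season.",
    "positive": "Remarkably, {team} racked up {n} goals this season!",
    "negative": "{team} managed only {n} goals this season.",
}

def realise_with_affect(team: str, goals: int, affect: str) -> str:
    """Choose words and structure to suit the target emotional register."""
    return AFFECT_FRAMES[affect].format(team=team, n=goals)

for affect in AFFECT_FRAMES:
    print(realise_with_affect("The Rovers", 12, affect))

Empirical work of the kind called for above would then ask whether such surface choices actually evoke the intended responses in readers.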
Humanities researchers are also well suited to scrutinising the metaphors
both developers and ordinary readers use to talk about NLG systems. The
humanities have long been attuned to the ways in which metaphor reflects and shapes understanding, and this attunement equips humanities scholars with eyes that are trained to navigate through murky grey areas.
Finally, Section 4’s argument for the consideration of NLG systems as
social agents may be extended to systems that produce computer-generated
music, visual art, or other kinds of output typically regarded as manifesta-
tions of human communicational intention and/or creativity. These dis-
cussions have already begun, but may benefit from incorporating the arguments made here.
We have long been at a point at which systems are generating texts that
are indistinguishable from those written by humans. In recent years, how-
ever, algorithmic authorship has become ever more pervasive in the every-
day lives of ordinary readers. Computer-generated news articles and
business reports hold increasing power to inform readers’ understandings
of themselves and the world around them. Whether a personalised audit
report (with the help of Narrative Science), a bespoke book-length report
about Facial Tissue Stock Excluding Toweling, Napkin, and Toilet Paper (à la
Philip Parker), or a 50,000-word ‘novel’ produced in observation of National Novel Generation Month, computer-generated texts are already woven into ordinary acts of reading.

References

Anon. (1841b). Machine Poetry. The Cincinnati Enquirer, 2.
(1844). The New Patent Novel Writer. Punch, 6, 268.
(1845). A Latin Hexameter Machine. The Athenæum, 921, 621.
(1846). The Life of Wollaston. In The British Quarterly Review, vol. 4.
London: Jackson & Walford, 81–115.
(2010). Robot with Mechanical Brain Thinks Up Story Plots (March 1931).
Modern Mechanix. http://blog.modernmechanix.com/robot-with-
mechanical-brain-thinks-up-story-plots [accessed 21 September 2017].
(2016a). Oakville A’s 11U AAA (Mosquito) Outhits Vaughan in 7–3 Defeat.
GameChanger. https://gc.com/game-57ca09c8348c02c25400003c/recap-
story [accessed 24 March 2017].
(2016b). Recap Stories Now Available in the App! GameChanger Blog.
https://blog.gc.com/2016/07/18/recap-stories-app [accessed 26
November 2016].
Bowman, S. R., Vilnis, L., Vinyals, O., et al. (2016). Generating Sentences
from a Continuous Space. In Proceedings of the 20th SIGNLL
Conference on Computational Natural Language Learning. Berlin:
Association for Computational Linguistics, pp. 10–21.
Bryson, J. J. (2018). Patiency Is Not a Virtue: The Design of Intelligent Systems
and Systems of Ethics. Ethics and Information Technology, 20, 15–26.
BTN.com Staff. (2012). First Quarter Recap: UNLV 3, Minnesota 0. Big
Ten Network. http://btn.com/2012/08/30/first-quarter-track-minne
sota-at-unlv [accessed 16 April 2018].
(2013). Wisconsin Beats Michigan, 68–59. Big Ten Network. http://btn
.com/2013/03/15/track-no-4-wisconsin-vs-no-5-michigan [accessed
26 November 2016].
Casebourne, I. (1996). The Grandmother Program: A Hybrid System for
Automated Story Generation. In Creativity and Cognition 1996
Conference Proceedings. Loughborough: The Creativity and Cognition Research Studios.
Koolhof, K. (2017). Ronald Giphart experimenteert met literaire robot. AD.
nl. www.ad.nl/wetenschap/ronald-giphart-experimenteert-met-litera
ire-robot~a5a3cb9f [accessed 1 October 2017].
Krittman, D., Matthews, P., and Glascott, M. G. (2015). Innovation Ushers in
the Modern Era of Compliance. Deloitte. www2.deloitte.com/content/
dam/Deloitte/us/Documents/finance/us-fas-how-natural-language-
is-changing-the-game-deloitte-only.pdf [accessed 15 March 2018].
Lake, R. W. (2017). Big Data, Urban Governance, and the Ontological
Politics of Hyperindividualism. Big Data & Society. https://doi.org/
10.1177/2053951716682537 [accessed 1 July 2019].
Latour, B. (2005). Reassembling the Social: An Introduction to Actor-Network-
Theory. Oxford: Oxford University Press.
Lebowitz, M. (1985). Story-Telling as Planning and Learning. Poetics, 14,
483–502.
McLuhan, M., Fiore, Q., and Agel, J. (1967). The Medium is the Massage:
An Inventory of Effects. New York: Bantam Books.
Meehan, J. R. (1976). The Metanovel: Writing Stories by Computer.
Unpublished PhD thesis, Yale University. www.semanticscholar
.org/paper/The-Metanovel%3A-Writing-Stories-by-Computer-
Meehan/35f03721ecef2a7315a8d85d02bacaf00660a3fb [accessed 21
November 2020].
Menabrea, L. F. (1843). Sketch of the Analytical Engine Invented by
Charles Babbage, Esq. In A. A. Lovelace, trans., R. Taylor, ed.,
Scientific Memoirs, Selected from the Transactions of Foreign
Academies of Science and Learned Societies, and from Foreign Journals,
Vol. III. London: Richard and John E. Taylor, pp. 666–731.
Method and Apparatus for Automated Authoring and Marketing (US
7266767 B2). (2006). Google Patents. https://patents.google.com/
patent/US7266767 [accessed 9 October 2017].
Pinch, T. J., and Bijker, W. E. (1984). The Social Construction of Facts and
Artefacts: Or How the Sociology of Science and the Sociology of
Technology Might Benefit Each Other. Social Studies of Science, 14,
399–441.
Podolny, S. (2015). If an Algorithm Wrote This, How Would You Even
Know? The New York Times. www.nytimes.com/2015/03/08/opi
nion/sunday/if-an-algorithm-wrote-this-how-would-you-even-
know.html [accessed 30 April 2018].
Polti, G. (1895). Les trente-six situations dramatiques. Paris: Mercure de
France.
Portela, M. (2018). Writing under Constraint of the Regime of Computation.
In J. Tabbi, ed., The Bloomsbury Handbook of Electronic Literature.
London: Bloomsbury, pp. 181–200.
Powers, T. M. (2013). On the Moral Agency of Computers. Topoi, 32(2),
227–36.
Prescott, T. J. (2017). Robots Are Not Just Tools. Connection Science, 29(2),
142–9.
Racter [Chamberlain, W., and Etter, T.]. (1984). The Policeman’s Beard Is
Half Constructed. New York: Warner Software/Warner Books.
Radford, A., Wu, J., Amodei, D., et al. (2019). Better Language Models and
Their Implications. OpenAI. https://openai.com/blog/better-lan
guage-models [accessed 20 August 2020].
Ray Murray, P., and Squires, C. (2013). The Digital Publishing
Communications Circuit. Book 2.0, 3(1), 3–23.
Reeves, B., and Nass, C. (1996). The Media Equation: How People Treat
Computers, Television, and New Media Like Real People and Places.
Cambridge: Cambridge University Press.
Regan, D. (2015). The Cover of The Sun Also Rises. GitHub. http://
alsorises.org [accessed 29 August 2018].
Roberts, S. (2017). Christopher Strachey’s Nineteen-Fifties Love Machine. The New Yorker.
Urbs Media. (n.d.). www.urbsmedia.com [accessed 25 February 2018].
Valtteri (n.d.) [archived by Internet Archive]. https://web.archive.org/web/
20190120010217/https://www.vaalibotti.fi [accessed 18 August 2020].
van der Weel, A. (2001). The Communications Circuit Revisited. Jaarboek
voor Nederlandse boekgeschiedenis, 8, 13–25.
(2011a). Changing Our Textual Minds: Towards a Digital Order of
Knowledge. Manchester: Manchester University Press.
(2011b). Our Textual Future. Logos, 22(3), 44–52.
(2014). From an Ownership to an Access Economy of Publishing. Logos,
25(2), 39–46.
(2015). Appropriation: Towards a Sociotechnical History of Authorship.
Authorship, 4(2). https://doi.org/10.21825/aj.v4i2.1438 [accessed 1 July
2019].