Lexical preferences in an automated story writing system (bibtex)
by Melissa Roemmele, Andrew S. Gordon
Abstract:
The field of artificial intelligence has long envisioned the ability of computers to automatically write stories (Dehn [1981], Lebowitz [1985], Meehan [1977], Turner [1993]). For a long time, progress on this task was limited by the difficulty of encoding the vast narrative knowledge needed to produce stories with diverse content. The rise of data-driven approaches to AI introduced the opportunity to acquire this knowledge automatically from story corpora. Since then, this approach has been utilized to generate narratives for different domains and genres (Li et al. [2013], McIntyre and Lapata [2009]), which has in turn made it possible for systems to collaborate with human authors in developing stories (Khalifa et al. [2017], Manjavacas et al. [2017], Swanson and Gordon [2012]).

Roemmele and Gordon [2015] introduced a web-based application called Creative Help that provides automated assistance for writing stories. The interface consists of a text box where users type “\help\” to automatically generate a suggestion for the next sentence in the story. One novelty of the application is that it tracks users’ modifications to the suggestions, which enables the original and modified form of a suggestion to be compared. This enables sentences generated by different models to be comparatively evaluated in terms of their influence on the story.

We examined a dataset of 1182 Creative Help interactions produced by a total of 139 authors, where each interaction consists of the generated suggestion and the author’s corresponding modification. The suggestions were generated by a Recurrent Neural Network language model (RNN LM), as described in Roemmele et al. [2017], which generates sentences by iteratively sampling words according to their observed probability in a corpus. The training corpus for the model analyzed here was 8032 books (a little over half a billion words) in the BookCorpus, which contains freely available fiction from a variety of genres. This paper briefly characterizes the generated sentences by highlighting their most prominent words and phrases and showing examples of them in context.
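The abstract describes the generation method only at a high level: the model produces a sentence by repeatedly sampling the next word in proportion to its observed probability. As a rough illustration of that sampling loop (using a toy bigram count model as a stand-in for the paper's RNN LM, with hypothetical function names throughout):

```python
import random
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Count which words follow which in a tokenized corpus
    (a toy stand-in for the RNN LM's learned distribution)."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def sample_sentence(counts, rng, max_len=20):
    """Generate a sentence by iteratively sampling each next word
    in proportion to its observed frequency after the previous word."""
    word, out = "<s>", []
    for _ in range(max_len):
        nexts = counts[word]
        word = rng.choices(list(nexts), weights=list(nexts.values()))[0]
        if word == "</s>":  # sampled an end-of-sentence marker
            break
        out.append(word)
    return " ".join(out)

corpus = ["the dog ran", "the cat ran", "the dog slept"]
model = train_bigram_model(corpus)
print(sample_sentence(model, random.Random(0)))
```

The actual system conditions on the full story context with a recurrent network rather than a single previous word, but the sampling step is the same: draw from the model's next-word distribution, append, repeat.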
Reference:
Lexical preferences in an automated story writing system (Melissa Roemmele, Andrew S. Gordon), In Proceedings of the 31st Conference on Neural Information Processing Systems (NIPS 2017), 2017.
Bibtex Entry:
@inproceedings{roemmele_lexical_2017,
	address = {Long Beach, CA},
	title = {Lexical preferences in an automated story writing system},
	url = {http://people.ict.usc.edu/~gordon/publications/NIPS-WS17},
	abstract = {The field of artificial intelligence has long envisioned the ability of computers to automatically write stories (Dehn [1981], Lebowitz [1985], Meehan [1977], Turner [1993]). For a long time, progress on this task was limited by the difficulty of encoding the vast narrative knowledge needed to produce stories with diverse content. The rise of data-driven approaches to AI introduced the opportunity to acquire this knowledge automatically from story corpora. Since then, this approach has been utilized to generate narratives for different domains and genres (Li et al. [2013], McIntyre and Lapata [2009]), which has in turn made it possible for systems to collaborate with human authors in developing stories (Khalifa et al. [2017], Manjavacas et al. [2017], Swanson and Gordon [2012]).

Roemmele and Gordon [2015] introduced a web-based application called Creative Help that provides automated assistance for writing stories. The interface consists of a text box where users type “{\textbackslash}help{\textbackslash}” to automatically generate a suggestion for the next sentence in the story. One novelty of the application is that it tracks users’ modifications to the suggestions, which enables the original and modified form of a suggestion to be compared. This enables sentences generated by different models to be comparatively evaluated in terms of their influence on the story.

We examined a dataset of 1182 Creative Help interactions produced by a total of 139 authors, where each interaction consists of the generated suggestion and the author’s corresponding modification. The suggestions were generated by a Recurrent Neural Network language model (RNN LM), as described in Roemmele et al. [2017], which generates sentences by iteratively sampling words according to their observed probability in a corpus. The training corpus for the model analyzed here was 8032 books (a little over half a billion words) in the BookCorpus, which contains freely available fiction from a variety of genres. This paper briefly characterizes the generated sentences by highlighting their most prominent words and phrases and showing examples of them in context.},
	booktitle = {Proceedings of the 31st {Conference} on {Neural} {Information} {Processing} {Systems} ({NIPS} 2017)},
	author = {Roemmele, Melissa and Gordon, Andrew S.},
	month = dec,
	year = {2017},
	keywords = {Narrative, UARC}
}