IBC2022 Tech Papers: Artificial Intelligence (AI) Won’t Write Award-Winning Scripts Soon, But It Could Help Humans Who Do

Summary

Assessing the quality and relevance of output from natural language generation (NLG) systems is a challenge, and one that is difficult to measure empirically. The proposed Story and Script Evaluation Framework (SSEF) aims to address this by combining qualitative and quantitative methods to evaluate AI-created material against four criteria – codifying the elements of a story using Freytag’s pyramid, measuring emotional connection and reactions, assessing whether the scenes or the story as a whole flow logically, and determining how real or plausible the story is. The framework is intended to be flexible enough to work across different genres of stories, although it is primarily designed for scripts, screenplays, and short- or long-form fiction. A key feature of SSEF is that it examines AI-generated content from the perspective of the reader or audience member: it focuses on the impact of a story on the individual, not on the technology or on adherence to a particular narrative theory or genre of story. Developing techniques to rationalize and assess emotional criteria requires a deep understanding of emotions, emotional connection, and emotional responses, and of the connection between the author or writer and an audience. Achieving this requires recognizing the importance of empathy and emotional connection in storytelling. Injecting empathy could also allow AI to create contextually correct and emotionally stimulating stories. If or when AI achieves intense, high-level connections with an audience, its storytelling will be more immersive, more engaging, more compelling, and ultimately more enjoyable. Part of NLG’s evolution is the development of tools such as the Story and Script Evaluation Framework, which can provide another way to refine story creation and mitigate the problem of an AI that does not know the meaning of the sentences it creates or the implications of the generation engine’s decisions for the emotional depth of a story.

Introduction

This article offers a preliminary evaluation framework for stories and screenplays that identifies four criteria – creative, emotional, information flow, and realism – as a way to classify and review material created by artificial intelligence (AI). The evaluation is undertaken from the perspective of the reader or audience, not focused on the technology, the dataset, or the characteristics of the natural language generation. As an alternative to untrained automatic metrics, SSEF will also enable the evaluation of human-authored material, facilitating a side-by-side comparison of, for example, AI-written and human-written output created from the same brief, as judged by the audience. Techniques such as untrained automatic metrics only look at the text, not at its effect on a person. To undertake an evaluation using the suggested criteria, one needs to codify the elements of the story, measure emotional connection and reactions, assess whether the scenes or the story as a whole unfold logically, and determine how real or plausible the story is.

The purpose of a story is to evoke an emotional response and move a person in some way, using storytelling, imagery, and drama to communicate. Stories are descriptive or narrative in form and include poetry, fiction, short stories, scripts, and screenplays. Regardless of form, these styles are all made up of the same basic elements and all aim to engage and emotionally affect an audience.

Automatically creating stories using AI requires natural language generation (NLG) technologies that can produce long, coherent passages realistically expressing a logical progression of events. AI has had some success in writing explanatory passages, which have featured in newspaper articles and as background profiles of, for example, sports stars. The current limit of approximately 1,500 words reflects the present state of NLG technology. It is an emerging technology with many unresolved issues and challenges resulting from the scarcity and complexity of data, and from the dynamic characteristics of the data available for the system to learn from.

There are sample scripts and short stories created using the many available natural language generation engines. Some scripts have also been turned into short films that are watchable but not (yet) Oscar-winning. Current projects in the public domain have all found ways to work around AI shortcomings such as the word limit or the use of less-than-perfect datasets. The other key limitation is the lack of empathy in AI systems. Empathizing and developing an emotional connection with an audience is crucial if AI is to be a truly useful tool for creating stories and scripts, or for helping human authors and writers. While cognitive and intellectual empathy can be learned, emotional empathy must be experienced. This is something AI cannot do: it can learn about empathy but cannot truly understand it, because it has not experienced it. This presents a major challenge for AI.

The proposed framework will be suitable and adaptable for use across all five genres of writing – expository, descriptive, narrative, persuasive, and journals and letters. However, the current focus is on fiction, scripts, and screenplays, as these areas offer the greatest challenge as well as the greatest opportunity to innovate and build a body of knowledge on methods and techniques to assess, review, edit, and curate AI-created text. Existing techniques include extrinsic or task-based assessment, subjective human assessment, and automated measurements such as LEPOR, ROUGE, BLEU, and METEOR. Automated solutions are primarily language- and syntax-based and do not evaluate AI material from the perspective of the intended audience. Subjective and task-based human assessments are currently manual and time-consuming processes. The proposed preliminary story and screenplay evaluation framework addresses these time and cost issues by combining existing understanding of story structure with new data collection methods to build a body of knowledge about the emotional reaction and intensity associated with story elements, from the audience’s point of view.
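To illustrate why untrained automatic metrics cannot stand in for an audience-centred evaluation, the sketch below implements a simplified BLEU-style n-gram precision in plain Python (the function name and example sentences are illustrative, not from the paper). Two sentences with the same emotional content but different wording score very differently, because the metric only compares surface text:

```python
from collections import Counter

def ngram_precision(candidate, reference, n=1):
    """Fraction of the candidate's n-grams that also appear in the reference.

    A simplified, BLEU-style surface-overlap score: it sees only the
    tokens, never the meaning or emotional effect of the text.
    """
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    if not cand_ngrams:
        return 0.0
    # Clipped overlap, as in BLEU: each reference n-gram can only be matched
    # as many times as it occurs in the reference.
    overlap = sum(min(count, ref_ngrams[g]) for g, count in cand_ngrams.items())
    return overlap / sum(cand_ngrams.values())

reference  = "the hero finally returns home after a long journey"
literal    = "the hero finally returns home after a long journey"
paraphrase = "at last the wanderer comes back to where he belongs"

print(ngram_precision(literal, reference))     # identical surface form: 1.0
print(ngram_precision(paraphrase, reference))  # low score despite similar meaning
```

The paraphrase would likely strike a human reader as conveying the same story beat, yet the metric scores it near zero; this is the gap SSEF’s audience-perspective criteria are intended to close.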

Much of the activity in the realm of AI and story creation revolves around mechanics and structure. Its exponents are like builders and owners, more focused on the building (what it is) than on who the tenants are (who creates the material), what they do (their creations), and how much people appreciate the structure (their emotional reaction or attachment). AI can write fiction, poetry, short stories, scripts, and screenplays, but it needs human intervention to set a premise and the basic story elements. AI cannot think for itself; it needs a reference. While natural language generation draws on knowledge of the art of how humans communicate, it does not know creativity or the spark of an idea.

Download the paper below