Telling Stories, Telling Lies: NWO Open Competition SSH XS Grant

Next academic year (2023-’24), I will be working on a research project for which I have received an open competition grant from NWO. It is titled Telling Stories, Telling Lies: The Role of Narrative Competence in the Detection and Interpretation of Online Misinformation. In this study, I map out the narrative characteristics of online misinformation and design a measuring instrument for ‘narrative competence’: knowledge of stories and the skills to analyze them. With a large-scale survey study among students, I will investigate whether narrative competence helps in detecting and interpreting online misinformation. The study will provide insight into how we can counteract misinformation through literature education. Below, you can read my full proposal.


Telling Stories, Telling Lies: The Role of Narrative Competence in the Detection and Interpretation of Online Misinformation

In today’s media sphere, countless, often contradictory, narratives proliferate: on social media, on Netflix, in politics and advertisements. Because of their persuasive powers, stories can be used to deceive and misinform [1, 2]. Falling for deception and relying on fake news can have serious repercussions: mistrust of the media, failure to receive proper medical treatment, belief in harmful conspiracies, and even an impediment to the democratic process. Assessing the trustworthiness of narratives on social media is, therefore, a pressing contemporary concern. Persuasive messages on social media that lack trustworthiness (usually classified as fake news) are studied by different (sub)disciplines, for example persuasion studies and communication and information sciences. However, these do not usually take the narrative characteristics of a text into account, such as narrative voice, perspective, plot, and characterization. Meanwhile, a large proportion of social media posts is either written in the form of a narrative or has elements of ‘narrativity’ [3], and we know that this narrative form has the potential to add to the persuasiveness of a message [1, 2]. It follows that narrative competence (defined as the ability to interpret, analyze, and evaluate how stories are constructed) should help readers to better detect and interpret misinformation online. Surprisingly, this has not been tested yet.

To test this hypothesis, Telling Stories, Telling Lies (TSTL) builds on narratology. This subdiscipline of literary studies, which focuses on narrative, offers a wealth of theoretical material for the analysis and interpretation of unreliability in, and of, stories and narrators [4-8]. So far, these theories have mostly been applied to fictional stories, for instance in literature and film. Bringing narratological knowledge and skills to bear on the contemporary challenge of online misinformation allows us to better understand the hermeneutic and cognitive processes that come into play when readers are confronted with fake news. Through familiarity with narratives and knowledge of how they are constructed, both derived from a literary education, readers learn when to adopt a trusting or a vigilant attitude towards a message, and they adapt their reading strategies accordingly. The present study will investigate the processes involved in this detection and interpretation of unreliability in narratives on social media, to answer the question of whether ‘narratively competent’ readers are indeed better at dealing with fake news. TSTL aims to (1) make an inventory of narrative characteristics that signal unreliability in social media messages; (2) develop a measuring scale for the construct ‘narrative competence’; and (3) test whether the skills associated with it are transferable to the social media domain.

[Innovation and Impact] TSTL will be the first study to test whether narrative competence furthers readers’ abilities to detect and interpret misinformation on social media. The project integrates (cognitive) narratology with persuasion and information studies as well as reader research. Scientifically, this has groundbreaking potential, not just for literary studies, but also for these other fields that study the effects and assessment of misinformation. Societally, the insights derived from this project can provide a basis for training narrative competence, which helps to develop an understanding of types and degrees of trustworthiness and heightens the ability to understand others’ motivations. This would have a positive impact on literature and information education, which meets current priorities of the Dutch Research Agenda (NWA), especially the theme Youth & Digitalization. In sum, TSTL contributes to building public resilience in the face of misinformation.

[State of the Art] Combined with empirical reading research, narratology offers us the tools to determine what constitutes ‘unreliability’ in narrative. Recent scholarship holds that the evaluation of untrustworthiness occurs at the intersection of bottom-up and top-down processes [4]. Detection of unreliability is based on textual clues (bottom-up reading processes) [5]. Readers then relate their own conceptual frameworks to the text to determine the extent of trustworthiness (top-down reading processes). Such frameworks include their beliefs, norms, knowledge of the world, and stories they have previously encountered. If this intersection triggers a discrepancy, a reader feels the urge to evaluate it, make an assessment of the text’s reliability, and adopt a cautious reading strategy [6, 7].

In this reading process, an invaluable skill for assessing unreliability is Theory of Mind (ToM), the ability to understand others’ mental states (thoughts, feelings, motivations) [10, 11]. ToM is important for detecting and evaluating dishonesty in communication and helps to recognize strategies of persuasion and deception in advertising [12, 13]. It has been suggested that familiarity with narrative fiction enhances ToM [10], but I argue that familiarity or exposure is not enough: a reader needs narrative competence to develop an understanding of types and degrees of unreliability.

Narratology helps us to discern these types and degrees of unreliability. Phelan [8] has developed a typology of unreliable narrators along three main axes of communication: facts and events (mis- or underreporting), understanding (mis- or underinterpreting), and values (mis- or underevaluating). His typology allows for distinctions between (i) untrustworthy narrators who willfully mislead us; (ii) fallible ones who naïvely share their misinterpretation of events; and (iii) immoral ones who misevaluate events.[1] Besides types of unreliability, narrative theory helps us to discern degrees. No narrator can be considered fully reliable. If the narrative is written from a particular perspective, it is necessarily limited in terms of perception and knowledge. But even an ‘impersonal’ narrator makes a selection: telling a story, we always leave out certain elements or details [7]. This helps us to arrive at a more fine-grained conceptualization of misinformation, beyond popular solutions centered on fact-checking. An understanding of such distinctions fosters narrative competence, which encompasses skills like the detection of textual clues and discrepancies, perspective-taking and ToM, inferencing (e.g., about intentions and motivations), and filling in gaps. But, as noted above, it has yet to be explored to what extent these skills are transferable to other, non-fictional environments like social media.

In sum, the theoretical framework for this study brings together narratological insights with those provided by persuasion studies in communication and reading research. This new theoretical framework (see figure 1) focuses on the cognitive and hermeneutic reading processes involved in detecting and interpreting untrustworthiness in (online) narratives. In figure 1, these processes are depicted (in simplified form) as a flow diagram of successive stages, all of which are informed by a reader’s narrative competences, background knowledge, and Theory of Mind.

[Research Questions and Methods]

1. What text-specific narrative characteristics signal untrustworthiness in social media posts? (Method: literature review)
2. How do we measure narrative competence? How is it conceptualized and operationalized? (Method: survey design: assessment, adaptation, and evaluation of a narrative competence scale)
3. Are these skills transferable, i.e., are narratively competent readers better at detecting and interpreting misinformation in social media narratives? (Method: analytic survey design and data analysis)

[Approach] TSTL investigates whether narrative competence gained through literary education can (positively) affect the detection and interpretation of fake news on social media. A literature review (RQ1) is conducted to support a cross-sectional analytic survey design. First, I will evaluate and adapt several existing scales for measuring narrative competence [e.g., 14]. I will assess the reliability and validity of the resultant narrative competence scale via a first online survey administered to university students (N = 100) (RQ2). After this, I will design a second survey with an online misinformation task, distributed among students (N = 100) divided into two groups that are expected to differ in narrative competence (based on their educational profile) (RQ3). They are invited to read and evaluate three texts with narrative characteristics on social media (I use existing platforms like Instagram), with a focus on the trustworthiness of these texts. I will assess recall (how well they remember parts of the text) and comprehension. In addition, my survey will measure familiarity with literature (through an Author Recognition Test) [15], narrative competence (using my own scale), and ToM, as well as potentially salient personality traits of the respondents as indicated by the (revised) theoretical framework. After the data collection phase, I will prepare and process the data and analyze the differences in assessment of trustworthiness between the two groups and three texts by means of analysis of covariance (ANCOVA).
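As an illustration of the reliability check in this approach (not part of the proposal itself): internal consistency of a scale like the planned narrative competence scale is commonly assessed with Cronbach’s alpha. The sketch below computes it for a small matrix of hypothetical Likert-style responses; the item scores and sample size are invented for demonstration, and the real survey would use N = 100 and the full item set.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of scale items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 5 respondents x 3 items on a 1-5 scale
responses = np.array([
    [2, 3, 3],
    [4, 4, 5],
    [1, 2, 2],
    [5, 5, 4],
    [3, 3, 3],
])

print(f"Cronbach's alpha: {cronbach_alpha(responses):.2f}")  # prints 0.94
```

Values of roughly 0.70 and above are conventionally taken as acceptable internal consistency; the subsequent group comparison (RQ3) would then be run with ANCOVA rather than a simple mean comparison, controlling for covariates such as literature exposure.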

[Work Plan]

Literature review Sept-Nov 2023
Research visit at narratology department, focus group on narrative competence Nov 2023
Design narrative competence scale Nov 2023 – Jan 2024
Assessment of the reliability and validity of the narrative competence scale Jan 2024
Finish article 1 (on assessment of narrative competence) Feb 2024
Recruitment of participants Feb 2024
Data collection March-April 2024
Data analysis April-May 2024
Present paper at international conference June–August 2024
Finish article 2 (on the effect of narrative competence on detecting online misinformation) Sept 2024



[1] Examples: (i) Yann Martel’s Life of Pi; (ii) Mark Haddon’s The Curious Incident of the Dog in the Night-Time, told by a boy with Asperger’s; (iii) Lolita’s Humbert Humbert. (Un)reliability can also remain ambiguous: one can interpret the governess in Henry James’s The Turn of the Screw either as the reliable narrator of a ghost story or as an unreliable narrator who misinterprets events.

Literature references

  1. Walter, N., H. Bilandzic, N. Schwarz, and J. J. Brooks (2021). Metacognitive approach to narrative persuasion: the desirable and undesirable consequences of narrative disfluency. Media Psychology 24(5), p. 713–739. doi:10.1080/15213269.2020.1789477
  2. Bird, M. Gretton, R. Cockerell, and A. Heathcote (2019). The cognitive load of narrative lies. Applied Cognitive Psychology 33(5), p. 936–942. doi:10.1002/acp.3567
  3. Page, R. (2018). Narratives Online. Shared Stories in Social Media. Cambridge, UK: Cambridge UP.
  4. Zerweck, B. (2019). The ‘Death’ of the Unreliable Narrator: Toward a Functional History of Narrative Unreliability. In Narrative in Culture, eds. A. Erll and R. Sommer. Berlin: De Gruyter, p. 215–240. doi:10.1515/9783110654370-013
  5. Jacke, J. (2019). Systematik unzuverlässigen Erzählens: Analytische Aufarbeitung und Explikation einer problematischen Kategorie. Series Narratologia, vol. 66. Berlin: De Gruyter.
  6. Fludernik, M. (2018). Towards a ‘Natural’ Narratology Twenty Years After. Partial Answers: Journal of Literature and the History of Ideas 16(2), p. 329–347. doi:10.1353/pan.2018.0023
  7. Culler, J. (2018). Naturalization in “Natural” Narratology. Partial Answers: Journal of Literature and the History of Ideas 16(2), p. 243–249. doi:10.1353/pan.2018.0015
  8. Phelan, J. (2017). Reliable, Unreliable, and Deficient Narration: A Rhetorical Account. Narrative Culture 4(1), p. 89–103. doi:10.13110/narrcult.4.1.0089
  9. Zunshine, L. (2019). What Mary Poppins Knew: Theory of Mind, Children’s Literature, History. Narrative 27(1), p. 1–129.
  10. De Mulder, H. N. M., F. Hakemulder, F. Klaassen, C. M. M. Junge, H. Hoijtink, and J. J. A. van Berkum (2022). Figuring Out What They Feel: Exposure to Eudaimonic Narrative Fiction Is Related to Mentalizing Ability. Psychology of Aesthetics, Creativity, and the Arts 16(2), p. 242–258. doi:10.1037/aca0000428
  11. De Mulder, H. N. M., F. Hakemulder, R. van den Berghe, F. Klaassen, and J. J. A. van Berkum (2017). Effects of exposure to literary narrative fiction: From book smart to street smart? Scientific Study of Literature 7(1), p. 129–169. doi:10.1075/ssol.7.1.06dem
  12. Oey, L. A., A. Schachner, and E. Vul (2019). Designing good deception: Recursive theory of mind in lying and lie detection. doi:10.31234/
  13. Gentina, E., R. Chen, and Z. Yang (2021). Development of theory of mind on online social networks: Evidence from Facebook, Twitter, Instagram, and Snapchat. Journal of Business Research 124, p. 652–666.
  14. Waldis, M., J. Hodel, and H. Thünemann (2015). Material-Based and Open-Ended Writing Tasks for Assessing Narrative Competence among Students. In New Directions in Assessing Historical Thinking, eds. K. Ercikan and P. Seixas. New York: Routledge, p. 171–131.
  15. Wimmer, L. and H. J. Ferguson (2022). Testing the validity of a self-report scale, author recognition test, and book counting as measures of lifetime exposure to print fiction. Behavior Research Methods, p. 1–. doi:10.3758/s13428-021-01784-2