How do we understand machines that talk to us?
How do people interact with machines that use large language models (LLMs)? Do we interpret the ‘utterances’ of LLMs in the same way we understand each other?
Many scholars take Gricean inferentialism (Grice 1957, 1967) to capture a fundamental aspect of human communication. On this view, understanding verbal utterances, whether spoken or written, involves making inferences about the speaker’s intended meaning. This process requires integrating contextual information and background knowledge with the syntactic structure obtained from parsing sentences.
However, this raises a perplexing question: how can we effectively engage in conversational exchanges with interlocutors that lack communicative intentions, such as LLMs?
Combining theoretical and experimental research, the project aims to answer several key questions:
- Is inferentialism about communication correct for our interactions with LLMs?
- Is a unified account of the interpretation of human and LLM utterances possible, or are they understood in fundamentally different ways?
- How do children perceive their interactions with LLMs?
Project: How do we understand machines that talk to us?
Partner: University of Oslo
Funding: University of Oslo
Period: 2024 – 2028
Project team
- Ingrid Lossius Falkum (UiO)
- Nicholas Elwyn Allott (UiO)
- Helle Bollingmo (UiO)
- Pierre Lison (NR)