Incremental Accumulation of Linguistic Context in Artificial and Biological Neural Networks

ABSTRACT Accumulated evidence suggests that Large Language Models (LLMs) are beneficial in predicting neural signals related to narrative processing. The way LLMs integrate context over large timescales, however, is fundamentally different from the way the brain does it. In this study, we show that unlike LLMs, which process large contextual windows in parallel, the incoming context available to the brain is limited to short windows of a few tens of words. We hypothesize that whereas lower-level brain areas process short contextual windows, higher-order areas in the default-mode network (DMN) engage in an online incremental mechanism in which the incoming short context is summarized and integrated with information accumulated across long timescales. Consequently, we introduce a novel LLM that, instead of processing the entire context at once, incrementally generates a concise summary of previous information. As predicted, we found that neural activity in the DMN was better predicted by the incremental model, whereas lower-level areas were better predicted by the short-context-window LLM.
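
For intuition, the incremental mechanism described in the abstract can be sketched as a loop that repeatedly compresses the running summary together with the next short window of words and uses the result as the model's context. The Python snippet below is a hedged illustration only: the summarize callable, the 32-word window, and the simple concatenation step are assumptions for illustration, not the authors' implementation.

    # Minimal sketch (not the authors' code) of the incremental-summarization idea:
    # the narrative is consumed in short windows of a few tens of words, and after
    # each window the running summary is re-compressed together with the new text.
    from typing import Callable, List

    def incremental_context(
        words: List[str],
        summarize: Callable[[str], str],  # assumed LLM-backed summarizer supplied by the caller
        window_size: int = 32,            # "a few tens of words" per window (assumption)
    ) -> List[str]:
        summary = ""
        contexts: List[str] = []
        for start in range(0, len(words), window_size):
            window = " ".join(words[start:start + window_size])
            # Fold the new short window into the accumulated summary and compress again,
            # so long-timescale information is carried forward only in condensed form.
            summary = summarize((summary + " " + window).strip())
            contexts.append(summary)
        return contexts

    # Example summarizer via the Hugging Face transformers pipeline (an assumption;
    # any instruction-tuned LLM could play this role):
    #   from transformers import pipeline
    #   bart = pipeline("summarization", model="facebook/bart-large-cnn")
    #   summarize = lambda text: bart(text, max_length=60, min_length=10)[0]["summary_text"]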

Media type:

Preprint

Year of publication:

2024

Published:

2024

Contained in:

bioRxiv.org (2024), 22 Jan. 2024

Language:

English

Contributors:

Tikochinski, Refael [Author]
Goldstein, Ariel [Author]
Meiri, Yoav [Author]
Hasson, Uri [Author]
Reichart, Roi [Author]

Links:

Full text [free of charge]

Subjects:

570
Biology

DOI:

10.1101/2024.01.15.575798

Funding institution / project title:

PPN (catalog ID):

XBI042179580