Exploring strategy differences between humans and monkeys with recurrent neural networks
Copyright: This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
Animal models are used to understand principles of human biology. Within cognitive neuroscience, non-human primates are considered the premier model for studying decision-making behaviors in which direct manipulation experiments are still possible. Some prominent studies have brought to light major discrepancies between monkey and human cognition, highlighting problems with unverified extrapolation from monkey to human. Here, we use a parallel model system, artificial neural networks (ANNs), to investigate a well-established discrepancy identified between monkeys and humans with a working memory task, in which monkeys appear to use a recency-based strategy while humans use a target-selective strategy. We find that ANNs trained on the same task exhibit a progression from random behavior (untrained) to recency-like behavior (partially trained) and finally to selective behavior (further trained), suggesting that monkeys and humans may occupy different points in the same overall learning progression. Surprisingly, what appears to be recency-like behavior in the ANN is in fact an emergent non-recency-based property of the organization of the network's state space as it develops through training. We find that explicit encouragement of recency behavior during training has a dual effect: it not only accentuates recency-like behavior but also speeds up learning altogether, providing an efficient shaping mechanism for reaching the optimal strategy. Our results suggest a new explanation for the discrepancy observed between monkeys and humans and reveal that what appears to be a recency-based strategy may, in some cases, not be recency at all.
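The recency-based versus target-selective strategies contrasted in the abstract can be illustrated with a small, hypothetical simulation (not taken from the article; the trial structure, memory window, and function names are illustrative assumptions): a recency observer responds whenever the current stimulus was seen recently, and so false-alarms when a distractor repeats, while a target-selective observer responds only when the cue itself reappears.

```python
import random

def make_trial(n_items=8):
    """A serial recognition trial (hypothetical stand-in for the paper's
    working memory task): a cue is shown first, then distractors, then the
    cue reappears.  One distractor repeats back-to-back to probe recency."""
    cue = 0
    distractors = random.sample(range(1, n_items), 4)
    # planted distractor repeat, followed by the true match (the cue)
    stream = distractors + [distractors[-1], cue]
    return cue, stream

def recency_strategy(cue, stream, window=3):
    """Respond 'match' to the first item seen within the last `window`
    presentations -- the cue's identity is never consulted."""
    seen = [cue]
    for item in stream:
        if item in seen[-window:]:
            return item
        seen.append(item)
    return None

def target_selective_strategy(cue, stream):
    """Respond 'match' only when the current item equals the cue."""
    for item in stream:
        if item == cue:
            return item
    return None

cue, stream = make_trial()
# The recency observer fires on the repeated distractor (a false alarm);
# the target-selective observer waits for the cue itself.
print(recency_strategy(cue, stream), target_selective_strategy(cue, stream))
```

The false alarm on the repeated distractor is the behavioral signature that, per the abstract, distinguishes the recency-like regime from the target-selective one.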
Media type: E-Artikel
Year of publication: 2023
Published: 2023
Contained in: Complete record - volume:19
Contained in: PLoS computational biology - 19 (2023), no. 11, 06 Nov., p. e1011618
Language: English
Contributors: Tsuda, Ben [author]
Links:
Subjects:
Notes: Date Completed 06.12.2023; Date Revised 06.12.2023; published: Electronic-eCollection; Citation Status MEDLINE
DOI: 10.1371/journal.pcbi.1011618
Funding:
Funding institution / project title:
PPN (catalog ID): NLM364747714
LEADER 01000naa a22002652 4500
001 NLM364747714
003 DE-627
005 20231226100056.0
007 cr uuu---uuuuu
008 231226s2023 xx |||||o 00| ||eng c
024 7  |a 10.1371/journal.pcbi.1011618 |2 doi
028 52 |a pubmed24n1215.xml
035    |a (DE-627)NLM364747714
035    |a (NLM)37983250
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Tsuda, Ben |e verfasserin |4 aut
245 10 |a Exploring strategy differences between humans and monkeys with recurrent neural networks
264  1 |c 2023
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 06.12.2023
500    |a Date Revised 06.12.2023
500    |a published: Electronic-eCollection
500    |a Citation Status MEDLINE
520    |a Copyright: This is an open access article, free of all copyright, and may be freely reproduced, distributed, transmitted, modified, built upon, or otherwise used by anyone for any lawful purpose. The work is made available under the Creative Commons CC0 public domain dedication.
520    |a Animal models are used to understand principles of human biology. Within cognitive neuroscience, non-human primates are considered the premier model for studying decision-making behaviors in which direct manipulation experiments are still possible. Some prominent studies have brought to light major discrepancies between monkey and human cognition, highlighting problems with unverified extrapolation from monkey to human. Here, we use a parallel model system, artificial neural networks (ANNs), to investigate a well-established discrepancy identified between monkeys and humans with a working memory task, in which monkeys appear to use a recency-based strategy while humans use a target-selective strategy. We find that ANNs trained on the same task exhibit a progression from random behavior (untrained) to recency-like behavior (partially trained) and finally to selective behavior (further trained), suggesting that monkeys and humans may occupy different points in the same overall learning progression. Surprisingly, what appears to be recency-like behavior in the ANN is in fact an emergent non-recency-based property of the organization of the network's state space as it develops through training. We find that explicit encouragement of recency behavior during training has a dual effect: it not only accentuates recency-like behavior but also speeds up learning altogether, providing an efficient shaping mechanism for reaching the optimal strategy. Our results suggest a new explanation for the discrepancy observed between monkeys and humans and reveal that what appears to be a recency-based strategy may, in some cases, not be recency at all
650  4 |a Journal Article
700 1  |a Richmond, Barry J |e verfasserin |4 aut
700 1  |a Sejnowski, Terrence J |e verfasserin |4 aut
773 08 |i Enthalten in |t PLoS computational biology |d 2005 |g 19(2023), 11 vom: 06. Nov., Seite e1011618 |w (DE-627)NLM15722645X |x 1553-7358 |7 nnns
773 18 |g volume:19 |g year:2023 |g number:11 |g day:06 |g month:11 |g pages:e1011618
856 40 |u http://dx.doi.org/10.1371/journal.pcbi.1011618 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d 19 |j 2023 |e 11 |b 06 |c 11 |h e1011618