Learning hierarchically-structured concepts
Copyright © 2021 Elsevier Ltd. All rights reserved.
We use a recently developed synchronous Spiking Neural Network (SNN) model to study the problem of learning hierarchically-structured concepts. We introduce an abstract data model that describes simple hierarchical concepts. We define a feed-forward layered SNN model, with learning modeled using Oja's local learning rule, a well-known biologically-plausible rule for adjusting synapse weights. We define what it means for such a network to recognize hierarchical concepts; our notion of recognition is robust, in that it tolerates a bounded amount of noise. Then, we present a learning algorithm by which a layered network may learn to recognize hierarchical concepts according to our robust definition. We analyze correctness and performance rigorously; the amount of time required to learn each concept, after learning all of its sub-concepts, is approximately O((1/(ηk))(ℓmax log(k) + 1/ɛ + b log(k))), where k is the number of sub-concepts per concept, ℓmax is the maximum hierarchical depth, η is the learning rate, ɛ describes the amount of uncertainty allowed in robust recognition, and b describes the amount of weight decrease for "irrelevant" edges. An interesting feature of this algorithm is that it allows the network to learn sub-concepts in a highly interleaved manner. This algorithm assumes that the concepts are presented in a noise-free way; we also extend these results to accommodate noise in the learning process. Finally, we give a simple lower bound saying that, in order to recognize concepts of hierarchical depth two with noise-tolerance, a neural network should have at least two layers. The results in this paper represent first steps in the theoretical study of hierarchical concepts using SNNs. The cases studied here are basic, but they suggest many directions for extensions to more elaborate and realistic cases.
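The synapse-weight update mentioned in the abstract, Oja's local learning rule, can be sketched in a few lines. This is a minimal generic illustration of the rule itself, not code from the paper; the paper's layered SNN model and learning algorithm are considerably more involved.

```python
import numpy as np

def oja_update(w, x, eta=0.01):
    # Oja's local learning rule: the postsynaptic activity y = w.x
    # drives a Hebbian increase eta*y*x, while the -eta*y^2*w term
    # keeps the weight vector bounded (||w|| converges toward 1).
    y = float(np.dot(w, x))
    return w + eta * y * (x - y * w)

# Repeatedly presenting one input pattern drives w toward that
# pattern's direction, with the weight norm self-normalizing.
w = np.array([0.5, 0.5])
x = np.array([1.0, 0.0])
for _ in range(500):
    w = oja_update(w, x, eta=0.1)
```

The self-normalizing decay term is what makes the rule "local": each synapse adjusts using only its own weight, its presynaptic input, and the postsynaptic activity.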
Media type: E-Article
Year of publication: 2021
Published: 2021
Contained in: To the full record - volume:143
Contained in: Neural networks : the official journal of the International Neural Network Society - 143(2021), 02 Nov., pages 798-817
Language: English
Contributors: Lynch, Nancy [author]
Links:
Topics: Brain-inspired algorithms
Notes: Date Completed 24.11.2021; Date Revised 24.11.2021; published: Print-Electronic; Citation Status MEDLINE
DOI: 10.1016/j.neunet.2021.07.033
Funding:
Funding institution / project title:
PPN (catalogue ID): NLM330304208
LEADER 01000naa a22002652 4500
001    NLM330304208
003    DE-627
005    20231225211008.0
007    cr uuu---uuuuu
008    231225s2021 xx |||||o 00| ||eng c
024 7  |a 10.1016/j.neunet.2021.07.033 |2 doi
028 52 |a pubmed24n1100.xml
035    |a (DE-627)NLM330304208
035    |a (NLM)34488015
035    |a (PII)S0893-6080(21)00304-X
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Lynch, Nancy |e verfasserin |4 aut
245 10 |a Learning hierarchically-structured concepts
264  1 |c 2021
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 24.11.2021
500    |a Date Revised 24.11.2021
500    |a published: Print-Electronic
500    |a Citation Status MEDLINE
520    |a Copyright © 2021 Elsevier Ltd. All rights reserved.
520    |a We use a recently developed synchronous Spiking Neural Network (SNN) model to study the problem of learning hierarchically-structured concepts. We introduce an abstract data model that describes simple hierarchical concepts. We define a feed-forward layered SNN model, with learning modeled using Oja's local learning rule, a well-known biologically-plausible rule for adjusting synapse weights. We define what it means for such a network to recognize hierarchical concepts; our notion of recognition is robust, in that it tolerates a bounded amount of noise. Then, we present a learning algorithm by which a layered network may learn to recognize hierarchical concepts according to our robust definition. We analyze correctness and performance rigorously; the amount of time required to learn each concept, after learning all of its sub-concepts, is approximately O((1/(ηk))(ℓmax log(k) + 1/ɛ + b log(k))), where k is the number of sub-concepts per concept, ℓmax is the maximum hierarchical depth, η is the learning rate, ɛ describes the amount of uncertainty allowed in robust recognition, and b describes the amount of weight decrease for "irrelevant" edges. An interesting feature of this algorithm is that it allows the network to learn sub-concepts in a highly interleaved manner. This algorithm assumes that the concepts are presented in a noise-free way; we also extend these results to accommodate noise in the learning process. Finally, we give a simple lower bound saying that, in order to recognize concepts of hierarchical depth two with noise-tolerance, a neural network should have at least two layers. The results in this paper represent first steps in the theoretical study of hierarchical concepts using SNNs. The cases studied here are basic, but they suggest many directions for extensions to more elaborate and realistic cases.
650  4 |a Journal Article
650  4 |a Brain-inspired algorithms
650  4 |a Hierarchical concepts
650  4 |a Learning hierarchical concepts
650  4 |a Recognizing hierarchical concepts
650  4 |a Representing hierarchical concepts
650  4 |a Spiking Neural Networks
700 1  |a Mallmann-Trenn, Frederik |e verfasserin |4 aut
773 08 |i Enthalten in |t Neural networks : the official journal of the International Neural Network Society |d 1996 |g 143(2021) vom: 02. Nov., Seite 798-817 |w (DE-627)NLM087746824 |x 1879-2782 |7 nnns
773 18 |g volume:143 |g year:2021 |g day:02 |g month:11 |g pages:798-817
856 40 |u http://dx.doi.org/10.1016/j.neunet.2021.07.033 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d 143 |j 2021 |b 02 |c 11 |h 798-817