Learning sensorimotor control with neuromorphic sensors: Toward hyperdimensional active perception
Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
The hallmark of modern robotics is the ability to directly fuse the platform's perception with its motoric ability, a concept often referred to as "active perception." Nevertheless, we find that action and perception are often kept in separate spaces, a consequence of traditional vision being frame based and existing only in the moment, while motion is a continuous entity. This gap is bridged by the dynamic vision sensor (DVS), a neuromorphic camera that can see motion. We propose a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent, using hyperdimensional binary vectors (HBVs). We used a DVS for visual perception and showed that the visual component can be bound with the system velocity to enable dynamic world perception, which creates an opportunity for real-time navigation and obstacle avoidance. Actions performed by an agent are directly bound to the perceptions experienced, forming its own "memory." Furthermore, because HBVs can encode entire histories of actions and perceptions, from atomic operations to arbitrary sequences, as constant-sized vectors, autoassociative memory was combined with deep learning paradigms for controls. We demonstrate these properties on a quadcopter drone ego-motion inference task and the MVSEC (multivehicle stereo event camera) dataset.
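The abstract's core operation, binding perception and action hypervectors into a single representation, can be illustrated with the standard binary hypervector algebra (XOR binding, majority-vote bundling, Hamming similarity). The sketch below is illustrative only: the dimensionality and the names `perception` and `velocity` are assumptions, not the paper's actual encoding of DVS events.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality; ~10k is a common choice


def random_hbv():
    """Random dense binary hypervector; random HBVs are nearly orthogonal."""
    return rng.integers(0, 2, size=D, dtype=np.uint8)


def bind(a, b):
    """XOR binding: associates two HBVs; the result is dissimilar to both inputs."""
    return np.bitwise_xor(a, b)


def bundle(*vs):
    """Majority-vote bundling (use an odd count to avoid ties);
    the result stays similar to each input."""
    s = np.sum(vs, axis=0)
    return (2 * s > len(vs)).astype(np.uint8)


def similarity(a, b):
    """Normalized Hamming similarity in [0, 1]; ~0.5 for unrelated HBVs."""
    return 1.0 - np.mean(a != b)


# Illustrative "perception" and "action" hypervectors
perception = random_hbv()
velocity = random_hbv()

# Bind perception with velocity into a single constant-sized memory vector ...
memory = bind(perception, velocity)
# ... and recover the action by unbinding with the perception
# (XOR is its own inverse).
recovered = bind(memory, perception)
assert similarity(recovered, velocity) == 1.0
```

Because XOR is self-inverse, querying the bound vector with either component exactly recovers the other; bundling several such bound pairs yields a noisy but still recognizable autoassociative memory.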
Media type: E-Article
Year of publication: 2019
Published: 2019
Contained in: Complete record - volume:4
Contained in: Science robotics - 4(2019), 30, dated 15 May
Language: English
Contributors: Mitrokhin, A [Author]
Links:
Topics: Journal Article
Notes: Date Completed 11.01.2021; Date Revised 11.01.2021; published: Print; Citation Status PubMed-not-MEDLINE
DOI: 10.1126/scirobotics.aaw6736
Funding:
Funding institution / project title:
PPN (catalog ID): NLM317060619
LEADER 01000naa a22002652 4500
001    NLM317060619
003    DE-627
005    20231225162434.0
007    cr uuu---uuuuu
008    231225s2019 xx |||||o 00| ||eng c
024 7  |a 10.1126/scirobotics.aaw6736 |2 doi
028 52 |a pubmed24n1056.xml
035    |a (DE-627)NLM317060619
035    |a (NLM)33137724
035    |a (PII)eaaw6736
040    |a DE-627 |b ger |c DE-627 |e rakwb
041    |a eng
100 1  |a Mitrokhin, A |e verfasserin |4 aut
245 10 |a Learning sensorimotor control with neuromorphic sensors |b Toward hyperdimensional active perception
264  1 |c 2019
336    |a Text |b txt |2 rdacontent
337    |a Computermedien |b c |2 rdamedia
338    |a Online-Ressource |b cr |2 rdacarrier
500    |a Date Completed 11.01.2021
500    |a Date Revised 11.01.2021
500    |a published: Print
500    |a Citation Status PubMed-not-MEDLINE
520    |a Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.
520    |a The hallmark of modern robotics is the ability to directly fuse the platform's perception with its motoric ability-the concept often referred to as "active perception." Nevertheless, we find that action and perception are often kept in separated spaces, which is a consequence of traditional vision being frame based and only existing in the moment and motion being a continuous entity. This bridge is crossed by the dynamic vision sensor (DVS), a neuromorphic camera that can see the motion. We propose a method of encoding actions and perceptions together into a single space that is meaningful, semantically informed, and consistent by using hyperdimensional binary vectors (HBVs). We used DVS for visual perception and showed that the visual component can be bound with the system velocity to enable dynamic world perception, which creates an opportunity for real-time navigation and obstacle avoidance. Actions performed by an agent are directly bound to the perceptions experienced to form its own "memory." Furthermore, because HBVs can encode entire histories of actions and perceptions-from atomic to arbitrary sequences-as constant-sized vectors, autoassociative memory was combined with deep learning paradigms for controls. We demonstrate these properties on a quadcopter drone ego-motion inference task and the MVSEC (multivehicle stereo event camera) dataset
650  4 |a Journal Article
650  4 |a Research Support, Non-U.S. Gov't
650  4 |a Research Support, U.S. Gov't, Non-P.H.S.
700 1  |a Sutor, P |e verfasserin |4 aut
700 1  |a Fermüller, C |e verfasserin |4 aut
700 1  |a Aloimonos, Y |e verfasserin |4 aut
773 08 |i Enthalten in |t Science robotics |d 2016 |g 4(2019), 30 vom: 15. Mai |w (DE-627)NLM288825853 |x 2470-9476 |7 nnns
773 18 |g volume:4 |g year:2019 |g number:30 |g day:15 |g month:05
856 40 |u http://dx.doi.org/10.1126/scirobotics.aaw6736 |3 Volltext
912    |a GBV_USEFLAG_A
912    |a GBV_NLM
951    |a AR
952    |d 4 |j 2019 |e 30 |b 15 |c 05