Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression
Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages over synchronous capture (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting of the time aggregation of NVS events into pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to an earlier work of ours, which did not use time aggregation). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in compression ratio is highest for low-event-rate, low-complexity scenes, whereas the improvement is minimal for high-event-rate, high-complexity scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower than those of state-of-the-art approaches for time aggregation intervals of less than 5 ms.
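The time-aggregation step described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the event tuple layout `(x, y, t_ms, polarity)`, the function names, and the 5 ms interval are assumptions for the example.

```python
from collections import Counter

def aggregate_events(events, interval_ms):
    """Bucket (x, y, t_ms, polarity) spike events into time-aggregated
    per-pixel histograms, one histogram per aggregation interval.
    Returns a dict mapping interval index -> Counter over (x, y, polarity)."""
    frames = {}
    for x, y, t_ms, pol in events:
        idx = int(t_ms // interval_ms)
        frames.setdefault(idx, Counter())[(x, y, pol)] += 1
    return frames

def to_point_cloud(frame):
    """Represent one aggregated frame as points: the per-pixel event count
    becomes an extra coordinate, ready for a point-cloud encoder."""
    return sorted((x, y, pol, count) for (x, y, pol), count in frame.items())

# Toy stream: three ON events on pixel (2, 3) within the first 5 ms window,
# one OFF event on pixel (7, 1) in the second window.
events = [(2, 3, 0.4, 1), (2, 3, 1.2, 1), (2, 3, 4.9, 1), (7, 1, 6.0, 0)]
frames = aggregate_events(events, interval_ms=5)
print(to_point_cloud(frames[0]))  # [(2, 3, 1, 3)]
```

The resulting `(x, y, polarity, count)` points are the kind of representation that a lossless point-cloud codec could then compress, per the paper's pipeline.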
Media type: E-Article
Year of publication: 2024
Published: 2024
Contained in: Complete record - volume:24
Contained in: Sensors (Basel, Switzerland) - 24(2024), 5, from: 21 Feb.
Language: English
Contributors: Adhuran, Jayasingam [Author]
Topics: Journal Article
Notes: Date Revised 15.03.2024; published: Electronic; Citation Status: PubMed-not-MEDLINE
DOI: 10.3390/s24051382
PPN (catalog ID): NLM36964638X
LEADER 01000caa a22002652 4500
001 NLM36964638X
003 DE-627
005 20240315233917.0
007 cr uuu---uuuuu
008 240313s2024 xx |||||o 00| ||eng c
024 7 |a 10.3390/s24051382 |2 doi
028 5 2 |a pubmed24n1330.xml
035 |a (DE-627)NLM36964638X
035 |a (NLM)38474918
035 |a (PII)1382
040 |a DE-627 |b ger |c DE-627 |e rakwb
041 |a eng
100 1 |a Adhuran, Jayasingam |e verfasserin |4 aut
245 1 0 |a Lossless Encoding of Time-Aggregated Neuromorphic Vision Sensor Data Based on Point-Cloud Compression
264 1 |c 2024
336 |a Text |b txt |2 rdacontent
337 |a Computermedien |b c |2 rdamedia
338 |a Online-Ressource |b cr |2 rdacarrier
500 |a Date Revised 15.03.2024
500 |a published: Electronic
500 |a Citation Status PubMed-not-MEDLINE
520 |a Neuromorphic Vision Sensors (NVSs) are emerging sensors that acquire visual information asynchronously when changes occur in the scene. Their advantages over synchronous capture (frame-based video) include low power consumption, high dynamic range, extremely high temporal resolution, and lower data rates. Although the acquisition strategy already results in much lower data rates than conventional video, NVS data can be further compressed. For this purpose, we recently proposed Time Aggregation-based Lossless Video Encoding for Neuromorphic Vision Sensor Data (TALVEN), consisting of the time aggregation of NVS events into pixel-based event histograms, arrangement of the data in a specific format, and lossless compression inspired by video encoding. In this paper, we still leverage time aggregation but, rather than performing encoding inspired by frame-based video coding, we encode an appropriate representation of the time-aggregated data via point-cloud compression (similar to an earlier work of ours, which did not use time aggregation). The proposed strategy, Time-Aggregated Lossless Encoding of Events based on Point-Cloud Compression (TALEN-PCC), outperforms the originally proposed TALVEN encoding strategy for the content in the considered dataset. The gain in compression ratio is highest for low-event-rate, low-complexity scenes, whereas the improvement is minimal for high-event-rate, high-complexity scenes. According to experiments on outdoor and indoor spike event data, TALEN-PCC achieves higher compression gains for time aggregation intervals of more than 5 ms. However, the compression gains are lower than those of state-of-the-art approaches for time aggregation intervals of less than 5 ms
650 4 |a Journal Article
650 4 |a neuromorphic spike events
650 4 |a neuromorphic vision sensor (NVS)
650 4 |a point-cloud compression
650 4 |a silicon retinas
650 4 |a spike encoding
700 1 |a Khan, Nabeel |e verfasserin |4 aut
700 1 |a Martini, Maria G |e verfasserin |4 aut
773 0 8 |i Enthalten in |t Sensors (Basel, Switzerland) |d 2007 |g 24(2024), 5 vom: 21. Feb. |w (DE-627)NLM187985170 |x 1424-8220 |7 nnns
773 1 8 |g volume:24 |g year:2024 |g number:5 |g day:21 |g month:02
856 4 0 |u http://dx.doi.org/10.3390/s24051382 |3 Volltext
912 |a GBV_USEFLAG_A
912 |a GBV_NLM
951 |a AR
952 |d 24 |j 2024 |e 5 |b 21 |c 02