UFO-Net: A Linear Attention-Based Network for Point Cloud Classification
Three-dimensional point cloud classification has been a hot research topic in recent years. Most existing point cloud processing frameworks lack context-aware features because they extract insufficient local feature information. We therefore designed an augmented sampling and grouping module to efficiently obtain fine-grained features from the original point cloud. In particular, this method strengthens the neighborhood around each centroid and makes reasonable use of the local mean and the global standard deviation to extract the point cloud's local and global features. In addition, inspired by the transformer structure UFO-ViT from 2D vision tasks, we are, to our knowledge, the first to apply a linearly normalized attention mechanism to point cloud processing, yielding a novel transformer-based point cloud classification architecture, UFO-Net. An effective local feature learning module is adopted as a bridge connecting the different feature extraction modules. Importantly, UFO-Net employs multiple stacked blocks to better capture the feature representation of the point cloud. Extensive experiments and ablation studies on public datasets show that this method outperforms other state-of-the-art methods: our network achieves 93.7% overall accuracy on the ModelNet40 dataset, 0.5% higher than PCT, and 83.8% overall accuracy on the ScanObjectNN dataset, 3.8% better than PCT.
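The abstract's key technical claim is the use of a linearly normalized attention mechanism in the spirit of UFO-ViT, whose cost grows linearly with the number of points rather than quadratically. The paper's actual implementation is not reproduced here; as a rough single-head NumPy sketch (the `xnorm` scaling factor and normalization axis are illustrative assumptions), the idea might look like:

```python
import numpy as np

def xnorm(x, gamma=1.0, axis=-1, eps=1e-6):
    """L2-normalize x along `axis`, scaled by a factor gamma.

    In UFO-ViT-style attention this replaces the softmax; gamma is
    learnable in the real model, fixed here for illustration.
    """
    return gamma * x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def linear_attention(q, k, v, gamma=1.0):
    """Linear-complexity attention sketch.

    q, k, v: (n, d) arrays for n points with d channels.
    Softmax attention forms the (n, n) matrix softmax(q k^T) first,
    costing O(n^2 d). Here we aggregate keys and values into the
    (d, d) matrix k^T v first, so the cost is O(n d^2) -- linear
    in the number of points n.
    """
    kv = k.T @ v                                # (d, d) summary of keys/values
    return xnorm(q, gamma) @ xnorm(kv, gamma)   # (n, d) attended features
```

The design point illustrated is the reassociation `(q k^T) v -> q (k^T v)`: because the normalization is applied per-factor instead of as a row softmax over the full `n x n` score matrix, that matrix never needs to be materialized, which is what makes the attention affordable for dense point clouds.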
Media type: E-Article
Year of publication: 2023
Published: 2023
Contained in: Zur Gesamtaufnahme - volume:23
Contained in: Sensors (Basel, Switzerland) - 23(2023), 12, 12 June
Language: English
Contributors: He, Sheng [author]
Links:
Subjects: Augmented sampling and grouping
Notes: Date Completed 10.07.2023; Date Revised 18.07.2023; published: Electronic; Citation Status: PubMed-not-MEDLINE
doi: 10.3390/s23125512
Funding:
Funding institution / project title:
PPN (catalog ID): NLM359217338
LEADER | 01000naa a22002652 4500 | ||
---|---|---|---|
001 | NLM359217338 | ||
003 | DE-627 | ||
005 | 20231226080414.0 | ||
007 | cr uuu---uuuuu | ||
008 | 231226s2023 xx |||||o 00| ||eng c | ||
024 | 7 | |a 10.3390/s23125512 |2 doi | |
028 | 5 | 2 | |a pubmed24n1197.xml |
035 | |a (DE-627)NLM359217338 | ||
035 | |a (NLM)37420679 | ||
035 | |a (PII)5512 | ||
040 | |a DE-627 |b ger |c DE-627 |e rakwb | ||
041 | |a eng | ||
100 | 1 | |a He, Sheng |e verfasserin |4 aut | |
245 | 1 | 0 | |a UFO-Net |b A Linear Attention-Based Network for Point Cloud Classification |
264 | 1 | |c 2023 | |
336 | |a Text |b txt |2 rdacontent | ||
337 | |a Computermedien |b c |2 rdamedia | ||
338 | |a Online-Ressource |b cr |2 rdacarrier | ||
500 | |a Date Completed 10.07.2023 | ||
500 | |a Date Revised 18.07.2023 | ||
500 | |a published: Electronic | ||
500 | |a Citation Status PubMed-not-MEDLINE | ||
520 | |a Three-dimensional point cloud classification has been a hot research topic in recent years. Most existing point cloud processing frameworks lack context-aware features because they extract insufficient local feature information. We therefore designed an augmented sampling and grouping module to efficiently obtain fine-grained features from the original point cloud. In particular, this method strengthens the neighborhood around each centroid and makes reasonable use of the local mean and the global standard deviation to extract the point cloud's local and global features. In addition, inspired by the transformer structure UFO-ViT from 2D vision tasks, we are, to our knowledge, the first to apply a linearly normalized attention mechanism to point cloud processing, yielding a novel transformer-based point cloud classification architecture, UFO-Net. An effective local feature learning module is adopted as a bridge connecting the different feature extraction modules. Importantly, UFO-Net employs multiple stacked blocks to better capture the feature representation of the point cloud. Extensive experiments and ablation studies on public datasets show that this method outperforms other state-of-the-art methods: our network achieves 93.7% overall accuracy on the ModelNet40 dataset, 0.5% higher than PCT, and 83.8% overall accuracy on the ScanObjectNN dataset, 3.8% better than PCT | ||
650 | 4 | |a Journal Article | |
650 | 4 | |a UFO attention | |
650 | 4 | |a augmented sampling and grouping | |
650 | 4 | |a classification | |
650 | 4 | |a point cloud | |
650 | 4 | |a transformer-based | |
700 | 1 | |a Guo, Peiyao |e verfasserin |4 aut | |
700 | 1 | |a Tang, Zeyu |e verfasserin |4 aut | |
700 | 1 | |a Guo, Dongxin |e verfasserin |4 aut | |
700 | 1 | |a Wan, Lingyu |e verfasserin |4 aut | |
700 | 1 | |a Yao, Huilu |e verfasserin |4 aut | |
773 | 0 | 8 | |i Enthalten in |t Sensors (Basel, Switzerland) |d 2007 |g 23(2023), 12 vom: 12. Juni |w (DE-627)NLM187985170 |x 1424-8220 |7 nnns |
773 | 1 | 8 | |g volume:23 |g year:2023 |g number:12 |g day:12 |g month:06 |
856 | 4 | 0 | |u http://dx.doi.org/10.3390/s23125512 |3 Volltext |
912 | |a GBV_USEFLAG_A | ||
912 | |a GBV_NLM | ||
951 | |a AR | ||
952 | |d 23 |j 2023 |e 12 |b 12 |c 06 |