EI-MVSNet: Epipolar-Guided Multi-View Stereo Network With Interval-Aware Label

Recent learning-based methods demonstrate a strong ability to estimate depth for multi-view stereo reconstruction. However, most of these methods extract features directly via regular or deformable convolutions, and few works consider the alignment of receptive fields between views while constructing the cost volume. Through analyzing the constraints and inference of previous MVS networks, we find that several shortcomings still hinder performance. To deal with these issues, we propose an Epipolar-Guided Multi-View Stereo Network with Interval-Aware Label (EI-MVSNet), which includes an epipolar-guided volume construction module and an interval-aware depth estimation module in a unified architecture for MVS. The proposed EI-MVSNet enjoys several merits. First, in the epipolar-guided volume construction module, we construct the cost volume with features from aligned receptive fields between different pairs of reference and source images via epipolar-guided convolutions, which take rotation and scale changes into account. Second, in the interval-aware depth estimation module, we attempt to supervise the cost volume directly and make depth estimation independent of extraneous values by perceiving the upper and lower boundaries, which achieves fine-grained predictions and enhances the reasoning ability of the network. Extensive experimental results on two standard benchmarks demonstrate that our EI-MVSNet performs favorably against state-of-the-art MVS methods. Specifically, our EI-MVSNet ranks 1st on both the intermediate and advanced subsets of the Tanks and Temples benchmark, which verifies the high precision and strong robustness of our model.

Media type:

E-article

Year of publication:

2024

Published:

2024

Contained in:

See complete record - volume: 33

Contained in:

IEEE Transactions on Image Processing : a publication of the IEEE Signal Processing Society - 33(2024), dated: 09., pages 753-766

Language:

English

Contributors:

Chang, Jiahao [Author]
He, Jianfeng [Author]
Zhang, Tianzhu [Author]
Yu, Jiyang [Author]
Wu, Feng [Author]

Links:

Full text

Topics:

Journal Article

Notes:

Date Revised 15.01.2024

published: Print-Electronic

Citation Status PubMed-not-MEDLINE

doi:

10.1109/TIP.2023.3347929

funding:

Funding institution / project title:

PPN (catalog ID):

NLM366849786