This paper proposes a generalizable, end-to-end deep learning-based method for relative pose regression between two images. Given two images of the same scene captured from different viewpoints, our method predicts the relative rotation and translation (direction and scale) between the two cameras.
Inspired by the classical pipeline, our method leverages Image Matching (IM) as a pre-training task for relative pose regression. Specifically, we use LoFTR, an attention-based matching architecture pre-trained on ScanNet, to extract semi-dense feature maps, which are then warped and fed into a pose regression network. Notably, we use a loss function with separate terms for the translation direction and scale. We believe this separation is important because the translation direction is determined by point correspondences, while the scale is inferred from a prior on shape sizes. Our ablations further support this choice.
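To make the separated loss concrete, the sketch below shows one plausible way to implement a pose loss with decoupled direction and scale terms in PyTorch. The specific choices here (a sign-invariant quaternion distance for rotation, a cosine error for the translation direction, an L1 error for the scale), the term weights, and the name relative_pose_loss are our own illustrative assumptions, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def relative_pose_loss(q_pred, t_pred, q_gt, t_gt,
                       w_rot=1.0, w_dir=1.0, w_scale=1.0):
    """Hypothetical pose loss with separate direction and scale terms.

    q_pred, q_gt: (B, 4) unit quaternions for the relative rotation.
    t_pred, t_gt: (B, 3) relative translations (direction times scale).
    """
    # Rotation term: sign-invariant distance between unit quaternions
    # (q and -q represent the same rotation).
    q_pred = F.normalize(q_pred, dim=-1)
    loss_rot = (1.0 - torch.abs(torch.sum(q_pred * q_gt, dim=-1))).mean()

    # Translation direction term: cosine error between unit directions.
    # This component is the one constrained by point correspondences.
    dir_pred = F.normalize(t_pred, dim=-1)
    dir_gt = F.normalize(t_gt, dim=-1)
    loss_dir = (1.0 - torch.sum(dir_pred * dir_gt, dim=-1)).mean()

    # Scale term: supervised separately, since metric scale must come
    # from priors on shape sizes rather than from correspondences.
    scale_pred = torch.norm(t_pred, dim=-1)
    scale_gt = torch.norm(t_gt, dim=-1)
    loss_scale = F.l1_loss(scale_pred, scale_gt)

    return w_rot * loss_rot + w_dir * loss_dir + w_scale * loss_scale

Keeping the three terms separate lets the correspondence-driven direction signal be weighted and optimized independently of the prior-driven scale, which is the separation the ablations examine.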
We evaluate our method on several datasets and show that it outperforms previous end-to-end methods and generalizes well to unseen datasets.
@misc{khatib2024leveraging,
title={Leveraging Image Matching Toward End-to-End Relative Camera Pose Regression},
author={Fadi Khatib and Yuval Margalit and Meirav Galun and Ronen Basri},
year={2024},
eprint={2211.14950},
archivePrefix={arXiv},
primaryClass={cs.CV}
}