Title: Coarse-to-fine vision-based localization by indexing scale-invariant features
Authors: Wang, Junqiu
Zha, Hongbin
Cipolla, Roberto
Affiliation: Peking Univ, Natl Lab Machine Percept, Beijing 100871, Peoples R China.
Univ Cambridge, Dept Engn, Cambridge CB2 1PZ, England.
Keywords: coarse-to-fine localization
scale-invariant features
vector space model
visual vocabulary
Issue Date: 2006
Publisher: IEEE Transactions on Systems, Man, and Cybernetics, Part B
Citation: IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2006, 36(2), 413–422.
Abstract: This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable.
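The coarse stage the abstract describes, i.e. matching a query image's quantized visual words against per-location vectors in the LVSM, can be sketched as follows. This is a minimal illustration under common vector-space-model assumptions (TF-IDF weighting, cosine similarity), not the authors' implementation; the word identifiers and the `coarse_localize` helper are hypothetical:

```python
import math
from collections import Counter

def tfidf_vectors(location_docs):
    """Build a TF-IDF vector per location from its bag of visual words."""
    n = len(location_docs)
    df = Counter()  # document frequency of each visual word
    for words in location_docs.values():
        df.update(set(words))
    vectors = {}
    for loc, words in location_docs.items():
        tf = Counter(words)
        total = len(words)
        vectors[loc] = {w: (c / total) * math.log(n / df[w])
                        for w, c in tf.items()}
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse dict-of-weights vectors."""
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def coarse_localize(query_words, vectors, location_docs, top_k=2):
    """Rank locations by similarity to the query image's visual words.

    The top_k candidates would then be handed to the fine (voting) stage.
    """
    n = len(location_docs)
    df = Counter()
    for words in location_docs.values():
        df.update(set(words))
    tf = Counter(query_words)
    q = {w: (c / len(query_words)) * math.log(n / df[w])
         for w, c in tf.items() if w in df}  # skip words unseen in the index
    scores = sorted(((cosine(q, v), loc) for loc, v in vectors.items()),
                    reverse=True)
    return [loc for _, loc in scores[:top_k]]
```

The coarse stage is cheap because it reduces each image to a sparse word-frequency vector; accuracy is then recovered in the fine stage, where the candidate locations are re-scored by a voting algorithm over individual feature matches.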
URI: http://hdl.handle.net/20.500.11897/153029
ISSN: 1083-4419
DOI: 10.1109/TSMCB.2005.859085
Indexed: SCI(E); EI
Appears in Collections: Key Laboratory of Machine Perception and Intelligence (Ministry of Education)

License: See PKU IR operational policies.