IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments


Thanh Nguyen Canh
Bao Nguyen Quoc
HaoLan Zhang
Bupesh Rethinam Veeraiah
Xiem HoangVan
Nak Young Chong
School of Information Science, JAIST, Japan
VNU University of Engineering and Technology, Vietnam
ECMR, 2025.

[Paper]
[Code]


Robust Visual SLAM (vSLAM) is essential for autonomous systems operating in real-world environments, where challenges such as dynamic objects, low texture, and critically, varying illumination conditions often degrade performance. Existing feature-based SLAM systems rely on fixed front-end parameters, making them vulnerable to sudden lighting changes and unstable feature tracking. To address these challenges, we propose "IRAF-SLAM", an Illumination-Robust and Adaptive Feature-Culling front-end designed to enhance vSLAM resilience in complex and challenging environments. Our approach introduces: (1) an image enhancement scheme to preprocess and adjust image quality under varying lighting conditions; (2) an adaptive feature extraction mechanism that dynamically adjusts detection sensitivity based on image entropy, pixel intensity, and gradient analysis; and (3) a feature culling strategy that filters out unreliable feature points using density distribution analysis and a lighting impact factor. Comprehensive evaluations on the TUM and European Robotics Challenge (EuRoC) datasets demonstrate that IRAF-SLAM significantly reduces tracking failures and achieves superior trajectory accuracy compared to state-of-the-art vSLAM methods under adverse illumination conditions. These results highlight the effectiveness of adaptive front-end strategies in improving vSLAM robustness without incurring significant computational overhead.


Paper

Thanh Nguyen Canh, Bao Nguyen Quoc, HaoLan Zhang, Bupesh Rethinam Veeraiah, Xiem HoangVan, Nak Young Chong

IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments

Submitted to ECMR 2025.

[pdf]    

Overview and Results



Overview



Settings


Overview of the proposed IRAF-SLAM architecture: the system comprises three core modules: Image Preprocessing, which enhances input image quality to improve feature visibility under varying illumination conditions; Adaptive Thresholding, which dynamically adjusts FAST detector sensitivity based on scene characteristics; and Feature Culling, which filters out unstable features prior to tracking, mapping, and loop closing within the ORB-SLAM3 framework.
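The Adaptive Thresholding idea can be illustrated with a small sketch. This is not the paper's implementation: the specific cues (histogram entropy, mean intensity, mean gradient magnitude), their normalization constants, and the linear blending into a FAST threshold are illustrative assumptions; the paper's actual weighting is not given on this page.

```python
import numpy as np

def image_stats(gray):
    """Per-frame statistics used to modulate detector sensitivity."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))        # texture richness, in [0, 8] bits
    brightness = gray.mean() / 255.0         # global intensity, in [0, 1]
    g = gray.astype(np.float32)
    gradient = (np.abs(np.diff(g, axis=1)).mean()
                + np.abs(np.diff(g, axis=0)).mean()) / 2.0  # mean edge strength
    return entropy, brightness, gradient

def adaptive_fast_threshold(gray, t_min=5, t_max=40):
    """Lower the FAST threshold on dark, low-texture frames so enough
    keypoints survive; raise it on rich, well-lit frames to suppress noise.
    The [t_min, t_max] range and cue weights are hypothetical."""
    entropy, brightness, gradient = image_stats(gray)
    # Normalize each cue to roughly [0, 1] and average them.
    score = (entropy / 8.0 + brightness + min(gradient / 32.0, 1.0)) / 3.0
    return int(round(t_min + score * (t_max - t_min)))
```

A flat dark frame drives the score toward zero and yields the minimum threshold, while a bright, textured frame pushes it toward the maximum; the resulting value would be fed to the FAST detector each frame.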


Experiments Results




Number of keypoints as a function of the FAST threshold on the LOL low-light dataset.
Example feature extraction results: ORB-SLAM3 (left), our method (right).
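The more even keypoint distribution comes from the Feature Culling module. As an illustrative stand-in (the page does not give the actual algorithm, and the paper's lighting impact factor is omitted here), a grid-based density cap that keeps only the strongest responses per cell captures the core idea:

```python
import numpy as np

def cull_by_density(keypoints, responses, img_shape, cell=32, max_per_cell=5):
    """Keep at most `max_per_cell` strongest keypoints per grid cell so
    features spread evenly instead of clustering on a few bright edges.
    `keypoints` is an (N, 2) array of (x, y) pixel coordinates; `cell`
    and `max_per_cell` are hypothetical tuning parameters.
    Returns the sorted indices of the surviving keypoints."""
    h, w = img_shape
    # Map each keypoint to a flat grid-cell id.
    cell_ids = ((keypoints[:, 1] // cell).astype(int) * (w // cell + 1)
                + (keypoints[:, 0] // cell).astype(int))
    keep = []
    for cid in np.unique(cell_ids):
        idx = np.flatnonzero(cell_ids == cid)
        # Strongest detector responses first; truncate overcrowded cells.
        order = idx[np.argsort(responses[idx])[::-1]]
        keep.extend(order[:max_per_cell])
    return np.sort(np.array(keep))
```

In ORB-SLAM3 terms, this would run on the detector output before descriptors are passed to tracking, so unstable points in dense clusters never enter the map.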


Example feature matching results: ORB-SLAM3 (left), our method (right).

Comparison on the EuRoC dataset of RMSE ATE (m) with available ground-truth data.



Comparison on the TUM-VI dataset of mean ATE (m) and RMSE ATE (m) in monocular mode.
Trajectory comparison for ORB-SLAM3, our method, and ground truth: EuRoC dataset (left), TUM dataset (right).




Trajectory comparison along the X, Y, and Z axes for ORB-SLAM3, our method, and ground truth: EuRoC dataset (left), TUM dataset (right).







Absolute Pose Error of ORB-SLAM3 and our method on the EuRoC dataset.













Absolute Pose Error of ORB-SLAM3 and our method on the TUM dataset.








































Trajectory Comparison.

























Code


 [github]


Citation


1. Canh T. N., Quoc B. N., Zhang H. L., Veeraiah B. R., HoangVan X., Chong N. Y. IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments. Submitted to the European Conference on Mobile Robots (ECMR), 2025.

@inproceedings{canh2025iraf,
  author = {Canh, Thanh Nguyen and Quoc, Bao Nguyen and Zhang, Hao Lan and Veeraiah, Bupesh Rethinam and HoangVan, Xiem and Chong, Nak Young},
  title = {{IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments}},
  booktitle = {European Conference on Mobile Robots (ECMR)},
  year = {2025}
}




Acknowledgements

This work was supported in part by JST SPRING, Japan Grant Number JPMJSP2102, and in part by the Asian Office of Aerospace Research and Development under Grant/Cooperative Agreement Award FA2386-22-1-4042.
This webpage template was borrowed from https://akanazawa.github.io/cmr/.