IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments


Thanh Nguyen Canh
Bao Nguyen Quoc
HaoLan Zhang
Bupesh Rethinam Veeraiah
Xiem HoangVan
Nak Young Chong
School of Information Science, JAIST, Japan
VNU University of Engineering and Technology, Vietnam
ECMR 2025.

[Paper]
[Code]


Robust Visual SLAM (vSLAM) is essential for autonomous systems operating in real-world environments, where challenges such as dynamic objects, low texture, and, critically, varying illumination conditions often degrade performance. Existing feature-based SLAM systems rely on fixed front-end parameters, making them vulnerable to sudden lighting changes and unstable feature tracking. To address these challenges, we propose IRAF-SLAM, an Illumination-Robust and Adaptive Feature-Culling front-end designed to enhance vSLAM resilience in complex and challenging environments. Our approach introduces: (1) an image enhancement scheme to preprocess and adjust image quality under varying lighting conditions; (2) an adaptive feature extraction mechanism that dynamically adjusts detection sensitivity based on image entropy, pixel intensity, and gradient analysis; and (3) a feature-culling strategy that filters out unreliable feature points using density distribution analysis and a lighting impact factor. Comprehensive evaluations on the TUM and European Robotics Challenge (EuRoC) datasets demonstrate that IRAF-SLAM significantly reduces tracking failures and achieves superior trajectory accuracy compared to state-of-the-art vSLAM methods under adverse illumination conditions. These results highlight the effectiveness of adaptive front-end strategies in improving vSLAM robustness without incurring significant computational overhead. The implementation of IRAF-SLAM is publicly available at https://thanhnguyencanh.github.io/IRAF-SLAM/.
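For context, the front-end adapts to per-frame image statistics. The sketch below shows one way to compute the three signals named above (image entropy, mean pixel intensity, and mean gradient magnitude) with OpenCV. It is a minimal illustration of the measurement step only; the exact formulas and weightings used in IRAF-SLAM may differ.

```python
import cv2
import numpy as np

def frame_statistics(gray):
    """Per-frame signals named in the abstract: entropy, intensity,
    and gradient magnitude. gray is a uint8 grayscale image; the
    exact formulas here are illustrative assumptions."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    p = hist / max(hist.sum(), 1.0)
    nz = p[p > 0]
    entropy = float(-np.sum(nz * np.log2(nz)))      # Shannon entropy, in bits
    mean_intensity = float(gray.mean())
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mean_gradient = float(np.sqrt(gx ** 2 + gy ** 2).mean())
    return entropy, mean_intensity, mean_gradient
```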


Paper

Thanh Nguyen Canh, Bao Nguyen Quoc, HaoLan Zhang, Bupesh Rethinam Veeraiah, Xiem HoangVan, Nak Young Chong

IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments

ECMR 2025.

[pdf]    

Overview and Results



Overview




Proposed IRAF-SLAM Architecture: The front-end is composed of three components: an image enhancement scheme, an adaptive feature extraction mechanism, and a feature-culling strategy.



Image enhancement scheme that preprocesses and adjusts image quality under varying lighting conditions.
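As a stand-in illustration of such preprocessing (not the exact operator used in the paper), the sketch below applies CLAHE for local contrast followed by an intensity-driven gamma correction, a common recipe for unevenly lit frames.

```python
import cv2
import numpy as np

def enhance(gray, clip_limit=3.0, grid=(8, 8)):
    """Illustrative preprocessing: CLAHE for local contrast, then a
    gamma that pulls the mean intensity toward mid-gray (brightens
    dark frames, darkens overexposed ones). Constants are assumptions."""
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=grid)
    out = clahe.apply(gray)
    gamma = np.log(0.5) / np.log(np.clip(out.mean() / 255.0, 0.01, 0.99))
    lut = ((np.arange(256) / 255.0) ** gamma * 255.0).astype(np.uint8)
    return cv2.LUT(out, lut)
```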



Adaptive feature extraction mechanism that tunes detection sensitivity based on image entropy, pixel intensity, and gradient analysis.
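One plausible realization, reusing the frame statistics sketched under the abstract, is to scale a FAST corner threshold between a floor and a ceiling: dim or low-texture frames get a low threshold so more candidates survive, while sharp, well-lit frames get a high one. The weights and normalization ranges below are illustrative, not the published parameters.

```python
import cv2

def adaptive_detector(entropy, mean_intensity, mean_gradient, lo=5, hi=40):
    """Map frame statistics to FAST sensitivity; all constants here
    are assumptions, not IRAF-SLAM's tuned values."""
    texture = min(entropy / 7.5, 1.0)            # ~7.5 bits ~ rich texture
    brightness = min(mean_intensity / 128.0, 1.0)
    sharpness = min(mean_gradient / 40.0, 1.0)
    score = (texture + brightness + sharpness) / 3.0
    threshold = int(round(lo + (hi - lo) * score))
    return cv2.FastFeatureDetector_create(threshold=threshold,
                                          nonmaxSuppression=True)
```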



Feature-culling strategy that filters out unreliable feature points using density distribution analysis and a lighting impact factor.
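A minimal sketch of that idea: bucket keypoints into a coarse grid, keep only the strongest responses per cell, and shrink the per-cell budget as the lighting impact factor (assumed here to lie in [0, 1], with 0 = benign and 1 = severe) grows. The grid size, cap, and weighting are assumptions, not the paper's parameters.

```python
def cull_features(keypoints, image_shape, lighting_factor, grid=16, cap=8):
    """Density-based culling with a lighting-dependent per-cell budget.
    keypoints: list of cv2.KeyPoint; lighting_factor: float in [0, 1]."""
    h, w = image_shape[:2]
    per_cell = max(1, int(cap * (1.0 - 0.5 * lighting_factor)))
    cells = {}
    for kp in sorted(keypoints, key=lambda k: -k.response):
        cell = (min(int(kp.pt[1] * grid / h), grid - 1),
                min(int(kp.pt[0] * grid / w), grid - 1))
        bucket = cells.setdefault(cell, [])
        if len(bucket) < per_cell:
            bucket.append(kp)
    return [kp for bucket in cells.values() for kp in bucket]
```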


Experiments




Comparison of estimated trajectories from ORB-SLAM2, our system, and the ground truth in the X-Y and X-Z planes.



Comparison of Relative Pose Error (RPE) between ORB-SLAM2 and our system.




Code


 [github]


Citation


1. Canh T. N., Quoc B. N., Zhang H. L., Veeraiah B. R., HoangVan X., Chong N. Y. IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments. European Conference on Mobile Robots (ECMR), 2025.

@inproceedings{canh2025iraf,
  author    = {Canh, Thanh Nguyen and Quoc, Bao Nguyen and Zhang, Hao Lan and Veeraiah, Bupesh Rethinam and HoangVan, Xiem and Chong, Nak Young},
  title     = {{IRAF-SLAM: An Illumination-Robust and Adaptive Feature-Culling Front-End for Visual SLAM in Challenging Environments}},
  booktitle = {European Conference on Mobile Robots (ECMR)},
  year      = {2025}
}




Acknowledgements

This work was supported by JST SPRING, Japan Grant Number JPMJSP2102.
This webpage template was borrowed from https://akanazawa.github.io/cmr/.