Hybrid Deep Learning and Reinforcement Mechanisms for Adaptive Robot Navigation in Dynamic Environments

Ben Amor

Abstract

In recent years, robotic navigation has increasingly relied on deep-learning-based control systems that can autonomously adapt to dynamic and uncertain environments. Traditional methods such as SLAM and PID control struggle to generalize to unseen scenarios, while reinforcement learning (RL) has shown strong potential for continuous adaptation. Pure RL methods, however, often suffer from convergence instability and sample inefficiency during real-world training. To address these issues, this paper proposes a Hybrid Deep Learning and Reinforcement Mechanism (HDLRM) framework that combines deep convolutional feature extraction with reinforcement-based policy adaptation. The proposed model employs a dual-branch architecture that integrates a Convolutional Neural Network (CNN) for spatial understanding with a Deep Q-Network (DQN) for decision optimization. The CNN extracts semantically rich environmental representations from multi-modal sensor inputs, while the DQN refines navigation policies through trial-and-error learning. A dynamic memory replay buffer and an adaptive reward modulation strategy are incorporated to stabilize training and improve decision consistency. Experiments conducted both in simulation (Gazebo) and on real-world mobile robots demonstrate superior performance in path efficiency, obstacle avoidance, and energy optimization compared with baseline methods. The HDLRM framework improves the average success rate by 11.7% and reduces collision events by 23.5% across dynamic obstacle scenarios. These results confirm that hybrid deep learning and reinforcement mechanisms can effectively improve the robustness and adaptability of autonomous robot navigation systems in complex and changing environments.
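
The dual-branch design described in the abstract (CNN encoder, DQN decision head, replay buffer, adaptive reward modulation) lends itself to a compact sketch. The Python/PyTorch code below shows one plausible realization under stated assumptions; all class names, layer sizes, hyperparameters, and the modulation rule are illustrative choices made here for exposition, not the authors' implementation.

import random
from collections import deque

import torch
import torch.nn as nn
import torch.nn.functional as F

class CNNEncoder(nn.Module):
    """Spatial branch: encodes a stacked multi-modal sensor image into features."""
    def __init__(self, in_channels=4, feat_dim=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.fc = nn.LazyLinear(feat_dim)  # infers the flattened size on first call

    def forward(self, x):
        return F.relu(self.fc(self.conv(x).flatten(1)))

class DQNHead(nn.Module):
    """Decision branch: maps encoded features to Q-values over discrete motions."""
    def __init__(self, feat_dim=256, n_actions=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, 128), nn.ReLU(), nn.Linear(128, n_actions)
        )

    def forward(self, feat):
        return self.net(feat)

class ReplayBuffer:
    """Fixed-capacity experience store, sampled uniformly at random."""
    def __init__(self, capacity=100_000):
        self.buf = deque(maxlen=capacity)

    def push(self, *transition):  # (state, action, reward, next_state, done)
        self.buf.append(transition)

    def sample(self, batch_size):
        batch = random.sample(self.buf, batch_size)
        s, a, r, s2, d = zip(*batch)
        return (torch.stack(s), torch.tensor(a),
                torch.tensor(r, dtype=torch.float32),
                torch.stack(s2), torch.tensor(d, dtype=torch.float32))

def modulated_reward(raw_reward, recent_success_rate, base_scale=1.0):
    """Illustrative adaptive modulation: damp the raw reward as the running
    success rate rises, so late-stage updates stay conservative. The actual
    modulation rule in the paper may differ."""
    return base_scale * raw_reward / (1.0 + recent_success_rate)

def dqn_update(encoder, head, target_head, buffer, optimizer,
               batch_size=64, gamma=0.99):
    """One temporal-difference update on a sampled mini-batch. The encoder is
    trained end-to-end with the head; the target uses the same encoder under
    no_grad for brevity (a separate target encoder is also common)."""
    s, a, r, s2, d = buffer.sample(batch_size)
    q = head(encoder(s)).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target_head(encoder(s2)).max(1).values
        target = r + gamma * (1.0 - d) * q_next
    loss = F.smooth_l1_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

In a training loop, each step would push a transition with its modulated reward into the buffer, call dqn_update once enough samples accumulate, and periodically synchronize the target via target_head.load_state_dict(head.state_dict()).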
