Intelligent Robot Navigation Using Deep Reinforcement Learning for Dynamic Environments
Abstract
Autonomous mobile robots operating in dynamic environments must effectively perceive environmental changes and make adaptive navigation decisions in real time. Traditional path planning algorithms often rely on static environment assumptions, which limits their performance in real-world scenarios where obstacles and environmental conditions continuously change. This paper proposes a deep reinforcement learning-based navigation framework that integrates perception, decision-making, and motion control into a unified learning architecture. The system employs convolutional neural networks to extract spatial features from sensor observations and utilizes a reinforcement learning policy network to generate navigation actions. A simulation-based evaluation environment is constructed to analyze navigation efficiency, collision rate, and trajectory stability. Experimental results demonstrate that the proposed framework achieves improved adaptability and higher success rates compared with classical navigation methods in dynamic environments.
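The perception-to-action pipeline described above (convolutional feature extraction over sensor observations, followed by a policy network that outputs navigation actions) can be sketched in miniature. The example below is purely illustrative: the occupancy-grid observation, the discrete action set, and all weights are assumptions, not the paper's actual architecture, and a real implementation would use a deep-learning framework and learned parameters.

```python
import numpy as np

# Hypothetical sketch of the perception -> decision pipeline: a single
# convolutional feature map feeds a softmax policy head over discrete
# navigation actions. Shapes, actions, and weights are illustrative only.

rng = np.random.default_rng(0)

ACTIONS = ["forward", "turn_left", "turn_right", "stop"]  # assumed action set

def conv2d_valid(obs, kernel):
    """Single-channel 'valid' 2D convolution (spatial feature extraction)."""
    kh, kw = kernel.shape
    oh, ow = obs.shape[0] - kh + 1, obs.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(obs[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(x):
    z = x - x.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def policy(obs, kernel, w, b):
    """Map a sensor observation to a probability distribution over actions."""
    features = np.maximum(conv2d_valid(obs, kernel), 0.0)  # ReLU feature map
    logits = features.ravel() @ w + b                      # linear policy head
    return softmax(logits)

# Toy 8x8 occupancy grid (1 = obstacle, 0 = free space).
obs = np.zeros((8, 8))
obs[3:5, 4:6] = 1.0

kernel = rng.normal(size=(3, 3))
w = rng.normal(size=(36, len(ACTIONS)))  # (8 - 3 + 1)^2 = 36 features
b = np.zeros(len(ACTIONS))

probs = policy(obs, kernel, w, b)
action = ACTIONS[int(np.argmax(probs))]
print(action, probs.round(3))
```

In training, the policy's parameters would be updated from a reward signal (e.g. penalizing collisions and rewarding goal progress) using a reinforcement learning algorithm such as DQN or PPO; here they are random, so the sampled action is arbitrary.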