Year
2024
Season
Spring
Paper Type
Master's Thesis
College
College of Computing, Engineering & Construction
Degree Name
Master of Science in Computer and Information Sciences (MS)
Department
Computing
NACO controlled Corporate Body
University of North Florida. School of Computing
First Advisor
Ayan Dutta, Ph.D.
Second Advisor
Anirban Ghosh, Ph.D.
Third Advisor
O. Patrick Kreidl, Ph.D.
Department Chair
Asai Asaithambi
College Dean
William Klostermeyer
Abstract
Coverage path planning (CPP) is the problem of covering all points in an environment and is a well-researched topic in robotics due to its practical relevance. This thesis investigates an offline CPP problem whose primary objective is to minimize the path length required for complete coverage. Because the literature suggests that turning consumes more energy than moving straight, we design a novel objective function that also aims to minimize the number of turns. We propose a deep reinforcement learning (DRL)-based framework built on a Transformer model. Unlike state-of-the-art reinforcement learning-based CPP solutions, which primarily use convolutional neural networks, Transformers require only minimal inductive biases in their design. We tested the proposed Transformer-based DRL framework on seven 8x8 environments with varying numbers and shapes of obstacles, under both fully and partially observable settings. Experimental results show that our solution outperforms breadth-first search coverage solutions in the number of steps and turns needed to reach 100% coverage. Furthermore, compared to a deep Q-network-based CPP technique that uses a convolutional neural network, our approach consistently converges to higher coverage percentages and lower turn counts.
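The objective described in the abstract couples coverage completion with a penalty on turns. As a rough illustration only, the sketch below shows one way such a per-step reward could be shaped for a grid agent; the function name, weights (step_cost, turn_cost, new_cell_bonus, finish_bonus), and state encoding are assumptions for this sketch, not the thesis's actual formulation.

```python
# Minimal sketch of a turn-aware coverage reward for a grid agent.
# All names and weights here are illustrative assumptions, not the
# reward function used in the thesis.

def coverage_reward(prev_heading, new_heading, cell_already_covered, all_covered,
                    step_cost=0.1, turn_cost=0.2, new_cell_bonus=1.0, finish_bonus=10.0):
    """Return a shaped reward that favors short paths with few turns."""
    reward = -step_cost                      # every move incurs a small cost
    if new_heading != prev_heading:          # changing direction counts as a turn
        reward -= turn_cost                  # turns cost more than straight moves
    if not cell_already_covered:             # reward visiting an uncovered cell
        reward += new_cell_bonus
    if all_covered:                          # bonus once 100% coverage is reached
        reward += finish_bonus
    return reward


# Example: turning onto an already-covered cell (about -0.3) is worse than
# moving straight onto a new cell (0.9).
print(coverage_reward("N", "E", cell_already_covered=True, all_covered=False))
print(coverage_reward("N", "N", cell_already_covered=False, all_covered=False))
```

Under this kind of shaping, a DRL agent (Transformer-based or otherwise) is pushed toward trajectories that cover every free cell while limiting both path length and turn count.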
Suggested Citation
Tiu, Daniel B., "Transformer-enabled deep reinforcement learning for coverage path planning" (2024). UNF Graduate Theses and Dissertations. 1256.
https://digitalcommons.unf.edu/etd/1256