Opportunistic Multi-robot Environmental Sampling via Decentralized Markov Decision Processes

Document Type

Article

Publication Date

1-1-2022

Abstract

We study the problem of information sampling with a group of mobile robots in an unknown environment. Each robot is assigned a unique region of the environment for the sampling task. The objective of the robots is to visit a subset of locations in the environment such that the collected information is maximized and, consequently, the underlying information model matches reality as closely as possible. The robots have limited communication ranges, and therefore can communicate only when they are near one another. The robots operate in a stochastic environment, and their control uncertainty is handled using factored Decentralized Markov Decision Processes (Dec-MDPs). When two or more robots communicate, they share their past noisy observations and use a Gaussian mixture model to update their local information models, which in turn helps them obtain a better Dec-MDP policy. Simulation results show that our proposed strategy predicts an information model closer to the ground truth than competing algorithms, and that it achieves a greater reduction in overall uncertainty than comparable algorithms.
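The abstract's observation-sharing step can be illustrated with a minimal sketch: when two robots come within communication range, they pool their past noisy observations and refit a Gaussian mixture model as their updated local information model. The function name, the number of mixture components, and the use of scikit-learn's GaussianMixture are illustrative assumptions, not the authors' implementation.

```python
# Sketch (assumed, not from the paper): GMM-based fusion of two robots'
# pooled (x, y, value) observations after a communication event.
import numpy as np
from sklearn.mixture import GaussianMixture

def fuse_observations(obs_a, obs_b, n_components=3, seed=0):
    """Pool two robots' noisy observations and fit a shared GMM over them."""
    pooled = np.vstack([obs_a, obs_b])
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(pooled)
    return gmm

# Example usage with synthetic samples from two robots' regions.
rng = np.random.default_rng(1)
obs_a = rng.normal(loc=[2.0, 2.0, 5.0], scale=0.5, size=(50, 3))
obs_b = rng.normal(loc=[8.0, 6.0, 1.0], scale=0.5, size=(50, 3))

model = fuse_observations(obs_a, obs_b)
# score_samples returns log-density; here it stands in for how well candidate
# locations are explained by the fused information model.
candidates = np.array([[2.0, 2.0, 5.0], [8.0, 6.0, 1.0]])
print(model.score_samples(candidates))
```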

Publication Title

Springer Proceedings in Advanced Robotics

Volume

22 SPAR

First Page

163

Last Page

175

Digital Object Identifier (DOI)

10.1007/978-3-030-92790-5_13

ISSN

2511-1256

E-ISSN

2511-1264

ISBN

978-3-030-92789-9
