Autonomous Mapping of Dark & Unknown Environments

Indoor Navigation and Mapping
OBJECTIVE
The aim of the product is to navigate and map an indoor environment with no light, rendering RGB-D cameras ineffective. It proposes an alternative to existing SLAM algorithms that offers low computational cost and high speed. The desired map accuracy can also be adjusted as needed, which further speeds up mapping. Deviations of the map from ground truth do not exceed 30 centimetres in a 1000-square-metre environment.
EXPERIMENTAL SET-UP
- The experimental environment is set up with four anchors used for trilateration over ultra-wideband (UWB). Each anchor is positioned at a different height and corner of the room, with positions defined with respect to a global frame of reference set initially. A Pozyx beacon (tag) is used to get the global position of the robot at any point in time; a trilateration sketch follows at the end of this section.
- A TurtleBot is used for robot locomotion and as the base for assembling all the components.
- A 9-DOF IMU is deployed to get the orientation of the robot in the Earth frame of reference. Its accelerometer, gyroscope and magnetometer readings are used to find the roll, pitch and yaw (see the orientation sketch at the end of this section).
- A Hokuyo LIDAR (URG-04LX-UG01) is used to get a point cloud of distances to obstacles.
All computations were carried out using Python 3 and ROS Kinetic on an Ubuntu 16.04 machine.
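As an illustration of how the robot's position can be recovered from the four UWB anchors, the sketch below solves the trilateration problem as a linear least-squares system. The anchor coordinates and range readings are made-up placeholders, not values from the actual set-up:

    import numpy as np

    # Hypothetical anchor positions (metres) in the global frame; the real
    # set-up places four anchors at different heights and corners of the room.
    ANCHORS = np.array([
        [0.0, 0.0, 2.5],
        [8.0, 0.0, 1.0],
        [8.0, 6.0, 2.0],
        [0.0, 6.0, 0.5],
    ])

    def trilaterate(ranges):
        # Subtracting the first range equation |x - p_i|^2 = r_i^2 from the
        # others cancels the quadratic term, leaving a linear system A x = b.
        p0, r0 = ANCHORS[0], ranges[0]
        A = 2.0 * (ANCHORS[1:] - p0)
        b = (r0**2 - ranges[1:]**2
             + np.sum(ANCHORS[1:]**2, axis=1) - np.sum(p0**2))
        position, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
        return position

    print(trilaterate(np.array([4.8, 5.1, 6.0, 4.4])))  # dummy UWB ranges (m)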
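Likewise, below is a minimal sketch of the standard roll/pitch/yaw computation from the 9-DOF IMU: roll and pitch from the accelerometer's gravity vector, yaw from the tilt-compensated magnetometer. This is the textbook form, not necessarily the exact filter running on the robot:

    import numpy as np

    def orientation_from_imu(accel, mag):
        # Roll and pitch from the gravity vector measured by the accelerometer.
        ax, ay, az = accel
        roll = np.arctan2(ay, az)
        pitch = np.arctan2(-ax, np.hypot(ay, az))

        # Tilt-compensate the magnetometer before taking the heading;
        # exact signs depend on the IMU's axis conventions.
        mx, my, mz = mag
        mx_h = (mx * np.cos(pitch) + my * np.sin(pitch) * np.sin(roll)
                + mz * np.sin(pitch) * np.cos(roll))
        my_h = my * np.cos(roll) - mz * np.sin(roll)
        yaw = np.arctan2(-my_h, mx_h)
        return roll, pitch, yaw  # radians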
PROCEDURE
- The Pozyx tag receives the position and orientation data of the robot. These readings are cleaned and further processed to localise the robot in the indoor environment (w.r.t. the global frame set earlier).
- The LIDAR provides local point-cloud data, which is used to map the immediate environment within the Field of View (FOV); a sketch of the scan-to-map projection follows this list.
- IMU data is used to reinforce the orientation estimate.
- ROS handles communication among the nodes created separately for each sensor; a single mapping node subscribes to all of their topics and aggregates the data used for mapping (see the node sketch after this list).
- The pixel (cell) size, which sets the map accuracy, is then chosen as per user requirements. Upon encountering an obstacle, the value of the corresponding cell is incremented, and with every reinforcement (up to 10) the cell becomes darker. Darker cells indicate a higher probability of the robot encountering an obstacle at that point (an update sketch follows this list).
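As referenced above, each LIDAR return can be projected from polar coordinates in the sensor frame into the global frame using the localised pose. A minimal sketch assuming a planar (x, y, yaw) pose; the parameter names mirror the fields of a ROS sensor_msgs/LaserScan:

    import numpy as np

    def scan_to_global(ranges, angle_min, angle_increment, pose):
        # ranges, angle_min, angle_increment mirror a sensor_msgs/LaserScan;
        # pose is the localised (x, y, yaw) of the robot in the global frame.
        ranges = np.asarray(ranges)
        angles = angle_min + angle_increment * np.arange(len(ranges))
        valid = np.isfinite(ranges)          # drop out-of-range returns
        r, a = ranges[valid], angles[valid]
        x, y, yaw = pose
        gx = x + r * np.cos(yaw + a)         # rotate by heading, then translate
        gy = y + r * np.sin(yaw + a)
        return np.column_stack([gx, gy])     # obstacle points, global frame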
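The node layout could look like the following sketch: one subscriber per sensor, with the scan callback driving the map update. The topic names and the Pozyx message type are assumptions for illustration, not the actual interfaces of the drivers used:

    import rospy
    from sensor_msgs.msg import LaserScan, Imu
    from geometry_msgs.msg import PointStamped  # assumed type for the Pozyx tag

    class Mapper(object):
        # One subscriber per sensor; the latest pose and IMU messages are
        # cached and fused when each new laser scan arrives.
        def __init__(self):
            self.pose = None
            self.imu = None
            rospy.Subscriber('/pozyx/position', PointStamped, self.on_pose)
            rospy.Subscriber('/imu/data', Imu, self.on_imu)
            rospy.Subscriber('/scan', LaserScan, self.on_scan)

        def on_pose(self, msg):
            self.pose = msg

        def on_imu(self, msg):
            self.imu = msg

        def on_scan(self, msg):
            if self.pose is None or self.imu is None:
                return  # wait until the robot is localised
            # ...project msg.ranges into the global frame, update the grid...

    if __name__ == '__main__':
        rospy.init_node('dark_mapper')
        Mapper()
        rospy.spin()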
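Finally, the reinforcement scheme can be realised as a counting occupancy grid: each hit increments a cell until it saturates at 10, and the count maps directly to a grey level when the map is rendered. A minimal sketch with an assumed cell size and map extent:

    import numpy as np

    CELL_SIZE = 0.10    # metres per cell; the user-tunable accuracy setting
    MAX_HITS = 10       # reinforcement saturates here (a fully dark cell)

    grid = np.zeros((600, 600), dtype=np.uint8)   # e.g. a 60 m x 60 m map

    def reinforce(points):
        # Increment the cell containing each obstacle point, capped at MAX_HITS.
        for i, j in np.floor(points / CELL_SIZE).astype(int):
            if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
                grid[i, j] = min(grid[i, j] + 1, MAX_HITS)

    def to_image():
        # 255 = no evidence (white); darker cells carry more obstacle evidence.
        return (255 - grid.astype(np.float32) / MAX_HITS * 255).astype(np.uint8)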
CONCLUSION
The Frontier Exploration algorithm was the inspiration for the autonomous motion planning of the robot. Mapping was faster and data storage less computationally intensive than with other algorithms available at comparable cost; a frontier-detection sketch follows below.
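For reference, frontier exploration repeatedly drives the robot toward the boundary between known free space and unexplored cells. A minimal frontier-detection sketch over a three-state grid with an assumed cell encoding, not the exact planner used here:

    import numpy as np

    UNKNOWN, FREE, OCCUPIED = -1, 0, 1   # assumed cell encoding

    def find_frontiers(grid):
        # A frontier cell is free space bordering at least one unknown cell.
        frontiers = []
        rows, cols = grid.shape
        for i in range(rows):
            for j in range(cols):
                if grid[i, j] != FREE:
                    continue
                neighbours = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
                if (neighbours == UNKNOWN).any():
                    frontiers.append((i, j))
        return frontiers

    # The planner repeatedly drives the robot to the nearest frontier and
    # rescans, until no frontiers remain and the map is complete.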