Understanding Surroundings with Point Clouds and 3D Localization
Navigation is a core capability of an autonomous system. The need to move around without external inputs or a pre-built map has driven research into 3D localization, which is necessary for the success of any autonomous system. For example, if a robot’s purpose is to clean up trash in a room, it cannot take any action from captured video of the room alone. The robot needs to know factors like the location of the trash can and the layout of the room to locate objects.
For this, point clouds are used to track objects in 3D space. A point cloud is a dataset that represents an object or a place: each point records the geometric coordinates of a single spot on an underlying surface along the X, Y, and Z axes. In simpler terms, a point cloud is a set of points that together represent a 3D shape, object, or area. Point clouds can be generated manually, by taking pictures from different perspectives, or automatically, by capturing images with a camera attached to a moving object.
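As a minimal sketch of this representation (the points below are made up for illustration, and NumPy is assumed), a point cloud is just an N×3 array of X, Y, Z coordinates from which simple geometry can be recovered:

```python
import numpy as np

# A toy point cloud: each row is one point's (X, Y, Z) coordinates in meters.
points = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.5],
    [1.0, 1.0, 0.5],
    [0.0, 1.0, 1.0],
])

# The centroid gives a rough location of the sampled surface.
centroid = points.mean(axis=0)

# The axis-aligned bounding box approximates the object's extent.
bbox_min = points.min(axis=0)
bbox_max = points.max(axis=0)
```

From nothing but raw coordinates, the array already yields a location (the centroid) and a size (the bounding box), which is exactly the kind of information the trash-cleaning robot needs.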
Many algorithms work on these point clouds and help self-navigation systems such as robots find their way accurately in unfamiliar environments. This article will discuss what point clouds are and answer some questions about how they are helpful in autonomous systems.
What’s the Point of Point Clouds?
Autonomous systems collect data from different sensors to perform their desired actions. In most cases, robots cannot act on raw sensor data alone. As with the trash-cleaning robot example, the robot could not extract information about the room from a raw video file by itself. Point clouds provide richer information about the area around a robot or autonomous vehicle, enabling it to perform the task at hand.
A prime example is self-driving cars. A self-driving car must be aware of its surroundings to know when to accelerate, decelerate, stop, turn, and so on. To build that awareness, the vehicle must identify other cars, trucks, pedestrians, barriers, traffic signs, signals, lanes, and more.
Point Clouds and Self-Driving Vehicles
RediMinds tested the use of point clouds to detect the surroundings of an autonomous vehicle. LiDAR data captured from the self-driving car was used to determine the location, approximate size, and orientation of the surrounding vehicles in 3D space. LiDAR is a ranging technique that aims a laser at an object and measures the time it takes for the reflected light to return to the receiver.
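The time-of-flight principle behind LiDAR ranging reduces to one line of arithmetic: the pulse travels to the target and back, so the range is half the round-trip path. A minimal sketch (the pulse timing below is illustrative, not measured data):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_range(round_trip_seconds: float) -> float:
    """Range to the target: light covers the distance twice, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after roughly 66.7 nanoseconds puts the target about 10 m away.
distance = lidar_range(66.7e-9)
```

Sweeping such a laser across the scene and converting each return time to a range is what produces the point cloud in the first place.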
Using the LiDAR data in conjunction with cylindrically stitched visual frames from six additional cameras, the model could detect other cars in 360 degrees around the vehicle. Although LiDAR data was used in this experiment, the technique could be extended to camera-only 3D object recognition, tracking, and monitoring with other robotic systems.
Why Not Use 2D Localization?
Humans view the world in three dimensions. To compete with or outperform us, autonomous systems need to obtain information about the world at least as rich as ours. Like 3D localization, 2D detection identifies surrounding objects and their locations in an image. However, 2D localization lacks crucial information, such as how far those objects are from the self-driving car.
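That missing distance falls straight out of a 3D detection's coordinates, whereas a 2D bounding box in image pixels carries no range at all. A sketch, using hypothetical detection values expressed relative to the ego vehicle:

```python
import math

def range_to_object(x: float, y: float, z: float) -> float:
    """Euclidean distance from the ego vehicle (origin) to a detected object's
    center, given the object's position in the vehicle's coordinate frame."""
    return math.sqrt(x * x + y * y + z * z)

# An object detected 3 m ahead and 4 m to the side is 5 m away.
distance = range_to_object(3.0, 4.0, 0.0)
```

With only a 2D pixel box, recovering this number would require extra assumptions about the object's true size and the camera geometry; in 3D it is a one-line computation.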
3D object detection answers these questions because it carries much richer information about the surroundings. When this detection output is supplied to the decision-maker along with other processed data, such as traffic signs, lane detections, and the trajectories of nearby objects, the decision-maker can act on that information.
Why Does This Matter?
Self-driving cars require access to a large amount of high-quality data in many categories to build a trustworthy model for predicting the surroundings and reducing accidents. If the overall goal of a self-driving vehicle is to create safer roads, then self-driving cars need to detect their surroundings as well as or better than humans do. Autonomous vehicles need to accurately identify thousands of street objects such as road signs, borders, lanes, traffic lights, automobiles, trucks, pedestrians, and corresponding travel routes.
Outside of self-driving cars, other robotic systems, like automated robots in manufacturing warehouses, need to be aware of the world around them to improve safety conditions. This is made possible with 3D localization and point clouds. The method may also be extended to 3D tracking and monitoring, for example, monitoring customers in a store with automated billing or tracking items in an automated process.
To learn more, check out this case study.