Understanding strategies to get Ouster LIDAR data for 2D AMCL

Hi,

while using an Ouster OS0-32 with the ROS2 driver, I noticed when looking at the 3D point cloud that apparently no specific beam sweep is done in the 0° vertical-angle plane, which would be the horizontal plane when the sensor is mounted vertically upright. In other words, 3D LIDARs with an even number of channels for the vertical resolution will scan rings that are, at closest, at +/- (vertical field of view)/(2 x vertical resolution) from this horizontal plane, but will not scan the horizontal plane per se (with vertical resolution being the number of vertical channels, i.e. the number of rings).
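To make the geometry concrete, here is a small Python sketch under the idealized assumption of uniformly spaced beams over the vertical FoV (the real angles come from the sensor metadata as beam_altitude_angles, and whether the spacing divides by the channel count or the channel count minus one depends on the actual beam table):

```python
import numpy as np

def beam_elevations(vertical_fov_deg, n_channels):
    """Idealized, uniformly spaced beam elevation angles over the vertical FoV.
    Real sensors publish the exact angles in the beam_altitude_angles metadata."""
    half = vertical_fov_deg / 2.0
    return np.linspace(-half, half, n_channels)

angles = beam_elevations(90.0, 32)   # OS0-32: 90 deg vertical FoV, 32 channels
print((angles == 0.0).any())         # False: with an even count, no beam at 0 deg
print(np.abs(angles).min())          # ~1.45 deg for the two center beams
```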

Is that correct? If so, that means all Ouster LIDARs with an even number for the vertical resolution would follow this principle, apart from the OSDome, whose lowest beam sweep would be planar and in the horizontal plane according to the FoV shown in its mechanical drawings.

My goal was initially to do a simple 2D mapping of an environment and then tune and evaluate 2D AMCL within the ROS2 Nav2 stack for localization. Most of the time, this would be done with a 2D LIDAR only.

So with Ouster hardware, there are several possible strategies, from what I understand:

-from the 3D point cloud, generate a 2D LaserScan and use that for 2D AMCL. With the Ouster ROS2 driver, you can already select a single ring number to produce a LaserScan, but that is how I noticed this scan was not planar (it was picking up objects below the horizontal plane). So you would have to generate a horizontal planar 2D laser scan from the 3D point cloud (similar to GitHub - ros-perception/pointcloud_to_laserscan: Converts a 3D Point Cloud into a 2D laser scan; see the launch sketch after this list).

-use the 3D point cloud and 3D AMCL, as shown in GitHub - catec/amcl3d: Adaptive Monte Carlo Localization (AMCL) in 3D, which by the way uses an Ouster LIDAR. However, in my case the platform motion is only horizontal/2D, so the 3D processing of a 3D AMCL designed for something like a flying drone may use far more resources than needed. I am also unsure how this 3D AMCL performs with moving elements in the environment.

-use the 3D point cloud and a 3D voxel approach to build an obstacle costmap representation, but I am unsure whether there is any technique that uses this for localization.
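For the first strategy, here is a minimal ROS2 launch sketch for pointcloud_to_laserscan; the topic names, target frame, and height band are assumptions to adapt to the actual setup:

```python
# Hypothetical launch file: adapt topics, frame, and ranges to your platform.
import math

from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='pointcloud_to_laserscan',
            executable='pointcloud_to_laserscan_node',
            remappings=[('cloud_in', '/ouster/points'),  # Ouster driver topic (assumed)
                        ('scan', '/scan')],
            parameters=[{
                'target_frame': 'base_link',   # re-orient the cloud before slicing
                'min_height': -0.05,           # thin band around the horizontal plane
                'max_height': 0.05,
                'angle_min': -math.pi,
                'angle_max': math.pi,
                'angle_increment': math.pi / 512.0,
                'range_min': 0.3,
                'range_max': 50.0,
                'use_inf': True,
            }],
        ),
    ])
```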

So I was wondering: among the above, what is the common approach for localization with 2D-AMCL-like techniques from hardware that produces a 3D point cloud?

Thank you for your help,

Nicolas

Hi @NAU

The ouster-ros driver already produces a LaserScan message which you can feed into the amcl package. The way it is currently set up, it simply extracts a single beam of your choice (through the scan_ring parameter) and publishes it as the LaserScan message. Since you mentioned that you are using a 32-beam lidar, you basically want to use beam number 15 or 16, which would be the closest to the center on most sensor models (except the OSDome and the high or low beam-type sensor models). That being said, even then the beam wouldn't map to an exact plane as with an actual 2D laser scanner, only something close. This is less of a problem with a higher beam count, i.e. 128 beams, but it still wouldn't give you a laser scan that is completely parallel to the ground (especially the further you move away from the sensor).
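To put numbers on that last point: for a beam at elevation angle theta, the vertical offset at horizontal distance d is roughly d * tan(theta). A quick sketch (the beam angle here is an assumed value for the center beam of an OS0-32):

```python
import math

theta_deg = 1.45   # assumed elevation of the closest-to-horizontal OS0-32 beam
for d in (1, 5, 10, 25, 50):
    dz = d * math.tan(math.radians(theta_deg))
    print(f"horizontal range {d:>2} m -> vertical offset {dz:.2f} m")
# at 50 m the "almost horizontal" ring is already ~1.3 m off the plane
```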

Another thing: as you mentioned, this doesn't take into account a sensor that is not parallel to the ground; the code doesn't handle this case. In that case, you would want to use the re-oriented point cloud and trace or select the points that fall onto the 2D plane.

Another item on our roadmap is, when generating the 2D LaserScan message, to combine the measurements from multiple beams, selecting the beam measurement that yields the shortest distance.
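A rough sketch of that combination, assuming a destaggered range image where rows are beams and columns are azimuth bins (the row window chosen here is arbitrary):

```python
import numpy as np

def combine_beams_min_range(range_image, rows=slice(14, 18)):
    """Collapse a few near-horizontal beams into one pseudo 2D scan by
    taking, per azimuth column, the shortest valid (non-zero) return."""
    band = range_image[rows].astype(float)
    band[band == 0] = np.inf          # 0 encodes "no return" in the range image
    scan = band.min(axis=0)           # shortest distance per azimuth column
    scan[~np.isfinite(scan)] = 0.0    # restore the "no return" marker
    return scan

# toy example: 32 beams x 1024 azimuth bins of range data (millimeters)
img = np.random.randint(0, 20000, size=(32, 1024))
print(combine_beams_min_range(img).shape)  # (1024,)
```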

A 3D localization solution would benefit from the additional data and is not necessarily meant only for localizing a flying drone. But it definitely requires more processing to converge.

Hi @Samahu,

thank you for your reply.

Yes, the higher the beam resolution, the closer the "mid rings" will be to a horizontal plane, for a given vertical FoV. One small remark related to that: I think there might be a factor of 2 error in the vertical angular resolution stated on the comparison page: Overview of our OS sensors | Ouster. For instance, when running the OS0-128, I obtain half of the value displayed on the website for the "vertical resolution up to".

Using this table, you can see that at max range the OS0 shows an altitude difference (delta z) of 0.2 m between the scan and the horizontal plane, which may already impact 2D AMCL localization accuracy, depending on how much the vertical boundaries of your environment vary around the scanned plane.

The table also shows that, at their respective max ranges, the OS0 might actually be the best suited for 2D AMCL, in the sense that its closest ring stays closest to the horizontal plane along the vertical z distance.
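For reference, the back-of-the-envelope behind that delta z, using the +/- FoV/(2 x resolution) formula from my first post and assumed OS0-128 figures:

```python
import math

fov_deg, channels, max_range_m = 90.0, 128, 35.0  # assumed OS0-128 figures
theta = math.radians(fov_deg / (2 * channels))    # closest ring: ~0.35 deg
print(f"{max_range_m * math.sin(theta):.2f} m")   # ~0.21 m off the plane
```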

On the LaserScan roadmap: great to have an item such as combining the measurements of several beams and selecting the beam measurement with the shortest distance for the 2D laser scan. I may suggest a variation: the shortest absolute distance is relatively easy to get, but what may also be of high interest is the shortest distance once projected into a plane, such as the base horizontal x-y plane or a plane defined by an orientation, as when the sensor is mounted with some tilt and roll on a ground platform.
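To illustrate the variation (a sketch only, assuming the mounting roll/pitch are known):

```python
import numpy as np

def projected_ranges(points_sensor, roll_deg, pitch_deg):
    """Rotate sensor-frame points by the known mounting roll/pitch, then
    return the distance of each point projected onto the base x-y plane."""
    r, p = np.radians([roll_deg, pitch_deg])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(r), -np.sin(r)],
                   [0, np.sin(r),  np.cos(r)]])
    Ry = np.array([[ np.cos(p), 0, np.sin(p)],
                   [0, 1, 0],
                   [-np.sin(p), 0, np.cos(p)]])
    pts = points_sensor @ (Ry @ Rx).T        # sensor -> base rotation
    return np.hypot(pts[:, 0], pts[:, 1])    # planar (x-y) distance

# selecting by projected rather than absolute distance:
pts = np.random.randn(100, 3) * 10
print(projected_ranges(pts, roll_deg=2.0, pitch_deg=-1.5).min())
```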

I am just wondering whether it would be worth adding to the hardware roadmap an additional beam sweep on the horizontal or exact mid plane (within the vertical sampling accuracy, of course), as this would unlock a 2D AMCL capability with less computational overhead than 3D AMCL for these sensors (the 3D point cloud would still be usable for 3D costmaps, obstacle avoidance, segmentation, classification, ...). That may be of real interest when sensors are mounted parallel to the ground (a ground mobile platform navigating indoors).

Thank you,

Nicolas

Hi @NAU,

I’ve run a number of tests to determine the best approach to convert OS lidar data streams into single-beam lidar scans and then run 2D lidar SLAM and localization algorithms. Ultimately, what worked best was the simple approach that @Samahu suggested above: choose the “almost-horizontal” beam based on elevation angle and project it into the 2D plane as a 2D point cloud.

While I understand your request for a hardware beam that projects at 0° in elevation, in practice I think you’ll be surprised by the reliability of the results of choosing the almost-horizontal beam. As an aside, I have seen many examples where a customer is using a 2D lidar scanner but doesn’t have the metrology capability to verify it is parallel to the ground. In essence they are running an almost-horizontal beam system without knowing it. Most SLAM algorithms are resilient to these small elevation tilts.
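In case it helps, here is roughly what that “pick the almost-horizontal beam and flatten it” step looks like on an organized cloud; the ring index and the (beams, azimuth, 3) layout are assumptions to adapt:

```python
import numpy as np

def ring_to_planar_scan(cloud_xyz, ring=15):
    """cloud_xyz: organized (n_beams, n_azimuth, 3) cloud in the sensor frame.
    Keep one near-horizontal ring and project it into the z = 0 plane,
    returning per-column range and bearing as a pseudo 2D scan."""
    pts = cloud_xyz[ring]                       # (n_azimuth, 3)
    ranges = np.hypot(pts[:, 0], pts[:, 1])     # drop z: planar distance
    bearings = np.arctan2(pts[:, 1], pts[:, 0])
    return ranges, bearings

cloud = np.random.randn(32, 1024, 3)            # stand-in for real data
ranges, bearings = ring_to_planar_scan(cloud, ring=15)
print(ranges.shape, bearings.shape)             # (1024,) (1024,)
```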

If you have a pcap, bag, or OSF file handy you can test this approach on your data easily with the command line tool that gets installed with pip install ouster-sdk:

The following terminal command will zero all data in each scan but the 15th scan line before running slam and visualizing:
ouster-cli source FILE filter U :15 filter U 16: slam viz --accum-num 200

Here’s what that looks like on an indoor dataset from top down:
