Hi Ouster Community,
I’m using an Ouster OS0 LiDAR for 3D pallet detection. My pipeline segments pallets in 2D images using a CNN, then projects the results into the 3D point cloud for classification. I tested this with Ouster’s sample data for car segmentation, using auto-exposure, and it worked well with minimal noise. However, my own PCAP files, recorded at 512 resolution, produce noisy 2D images despite auto-exposure and beam uniformity correction. This noise causes my pallet-trained CNN to fail.
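For context, the 2D-to-3D step above can be sketched with plain NumPy. With the Ouster SDK, the per-pixel 3D coordinates would come from `client.XYZLut(info)(scan)` and the 2D image from a destaggered channel field; the arrays below are synthetic stand-ins, but the key point holds: the range image and the point cloud share the same H×W grid, so a 2D segmentation mask indexes the 3D points directly.

```python
import numpy as np

# Synthetic stand-ins for SDK outputs (OS0-128 in 512 mode).
H, W = 128, 512
xyz = np.random.rand(H, W, 3)        # per-pixel 3D coordinates (meters)
mask = np.zeros((H, W), dtype=bool)  # CNN segmentation mask (True = pallet)
mask[40:60, 100:160] = True          # hypothetical detected region

# Same grid for image and cloud, so the mask selects the pallet points.
pallet_points = xyz[mask]            # (N, 3) array of candidate pallet points
```

One caveat when doing this with real SDK data: the xyz array from `XYZLut` is staggered, while the image fed to the CNN is usually destaggered, so both must be brought into the same layout before masking.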
Details:
- LiDAR: OS0, 512 resolution
- Preprocessing: Auto-exposure, beam uniformity correction
- Issue: Noisy 2D images in my data (not in sample data), poor pallet detection
Questions:
- Is 512 resolution too low for pallet detection? Should I try 1024 or 2048?
- What recording parameters (e.g., range, integration time) can I adjust to reduce noise?
- Any preprocessing tips for noisy 2D images?
- Could pallet reflectivity or lighting be a factor?
Sounds like a cool approach! If you can post an example of the pallet image that you are feeding the CNN, that could be helpful.
I would check which ChanField you are running the CNN on. The YOLO example posted on the forum uses the NEAR_IR channel because it's outdoors during the day with plenty of ambient light.
If you are using the NEAR_IR channel in an indoor environment, the SNR might not be great because there's no sunlight. You can try the SIGNAL and/or REFLECTIVITY channels as alternatives; they are lighting-agnostic and also work with most CNNs. The exposure routine needs to be slightly different for each, and will require some experimentation.
For REFLECTIVITY, since it is calibrated, you can usually do something like this to skip a continuous auto-exposure routine:
import numpy as np
from ouster.sdk import client  # in older SDK versions: from ouster import client

refl = client.destagger(scan.sensor_info,
                        scan.field(client.ChanField.REFLECTIVITY))
img = np.clip(refl.astype(np.float32) / 128, 0, 1)
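For SIGNAL, which is not calibrated and varies with the scene, a simple percentile-based normalization can stand in for a continuous auto-exposure routine. This is a rough sketch, not the SDK's built-in exposure logic, and the percentile cut-offs are assumptions you would tune per scene:

```python
import numpy as np

def percentile_exposure(field, lo=0.1, hi=99.9):
    """Map a raw channel field (e.g. SIGNAL) to [0, 1] using robust percentiles.

    Clipping at the lo/hi percentiles suppresses hot pixels and dropout
    noise that would otherwise dominate a min/max normalization.
    """
    field = field.astype(np.float32)
    lo_v, hi_v = np.percentile(field, [lo, hi])
    if hi_v <= lo_v:
        return np.zeros_like(field)
    return np.clip((field - lo_v) / (hi_v - lo_v), 0.0, 1.0)
```

For a recorded PCAP you can even compute the percentiles once over a few representative scans and reuse them, which keeps the image brightness stable from frame to frame.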
In some cases, I've seen the best performance come from processing all ChanFields in parallel through their own CNNs and combining the final results with a simple heuristic.
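One simple such heuristic is a per-pixel majority vote over the channel masks. The masks below are hypothetical placeholders for what separate CNNs might return on SIGNAL, REFLECTIVITY, and NEAR_IR images of the same scan:

```python
import numpy as np

# Hypothetical per-channel segmentation masks (True = pallet pixel).
sig = np.zeros((128, 512), dtype=bool); sig[10:20, 5:50] = True
refl = np.zeros((128, 512), dtype=bool); refl[10:22, 5:48] = True
nir = np.zeros((128, 512), dtype=bool)  # NEAR_IR found nothing indoors

# Keep pixels flagged by at least two of the three channels.
votes = sig.astype(np.uint8) + refl.astype(np.uint8) + nir.astype(np.uint8)
combined = votes >= 2
```

At the detection level (rather than pixel level) the analogous heuristic is to keep a box only if it overlaps a box from at least one other channel.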
And yes, it might give better results in 1024 mode. That said, an OS0-128 in 512 mode has roughly square-aspect-ratio pixels in angular space, like a camera, so I would expect decent results from that. If you try 1024 mode and don't see good results, try duplicating every row of the image to make a 256×1024 image with roughly square-aspect-ratio pixels and then passing that through the CNN.
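The row-duplication trick is a one-liner with NumPy (the 128×1024 input is a stand-in for an OS0-128 image in 1024 mode):

```python
import numpy as np

img = np.random.rand(128, 1024).astype(np.float32)  # OS0-128, 1024 mode
square = np.repeat(img, 2, axis=0)                  # each row twice -> 256 x 1024
```

Duplicating rows (rather than interpolating) keeps every original measurement intact; bilinear upsampling would smooth the image but can blur thin structures like pallet slats.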