Vehicle speed detection

I’m new to working with LiDAR and looking for some guidance on how to achieve vehicle speed detection for traffic enforcement.
I’ll be using Python with the latest Ouster SDK (0.16) — any sample code, tutorials, or advice would be very helpful.

This is not a trivial problem. I can’t give you a complete working example, but I can offer some leads. There are multiple paths you could take:

  • Use a 2D object detection and tracking library: identify the cars and then get their associated range measurements (this post covers that part). Next, extract and project the points into 3D using the XYZLut tools for each tracked object, and estimate the center of each object’s point cloud, which we will treat as the object position. By comparing the object position in the current frame against its positions in previous frames, you can estimate how fast the object was moving.
  • Work directly with 3D data: project the entire frame to 3D, perform background subtraction to remove static structures and expose moving blobs, then apply a clustering algorithm like DBSCAN. Once you have a cluster for every blob, you need to match it against blobs from previous frames and make the proper associations to obtain tracking information. Then, as in the previous path, compute the center of each blob and compare it with the center from the previous frame to estimate the velocity. (Note that in this path, unlike the previous one, you are not necessarily identifying the nature of the moving object.)
  • As an alternative to manual clustering, you can use deep learning methods for 3D object detection and tracking. One of these is PointPillars, which works directly on point clouds and can detect and classify objects similar to how YOLO works, but on 3D data. Once you have the objects and their estimated centers, you can apply the same method to estimate velocity.
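To make the second path more concrete, here is a minimal sketch of background subtraction plus centroid-based speed estimation. It uses pure NumPy with tiny synthetic range images standing in for real frames; the frame rate, threshold, and helper names are my own assumptions, not Ouster SDK API (in a real pipeline the range image and per-pixel XYZ would come from the SDK’s XYZLut projection, and you would cluster the masked points first, e.g. with DBSCAN, when several objects are in view):

```python
import numpy as np

FRAME_RATE_HZ = 10.0   # assumed lidar frame rate
MOTION_THRESH_M = 1.0  # range difference treated as motion (tune per scene)

def moving_mask(range_img, background_img):
    """Background subtraction on the range image: pixels whose range differs
    noticeably from the static background belong to moving objects."""
    return np.abs(range_img - background_img) > MOTION_THRESH_M

def centroid(xyz_img, mask):
    """Center of the moving blob's points, taken as the object position.
    (With multiple objects you would cluster the masked points first and
    take one centroid per cluster.)"""
    return xyz_img[mask].mean(axis=0)

def speed_mps(c_prev, c_curr, frame_rate=FRAME_RATE_HZ):
    """Speed from centroid displacement between consecutive frames."""
    return np.linalg.norm(c_curr - c_prev) * frame_rate

# --- toy demo with synthetic frames (real data would come from the SDK) ---
H, W = 8, 32
background = np.full((H, W), 30.0)   # empty road: everything ~30 m away
xyz = np.zeros((H, W, 3))            # fake per-pixel XYZ, 1 unit per pixel
xyz[..., 0], xyz[..., 1] = np.meshgrid(np.arange(W), np.arange(H))

frame1 = background.copy(); frame1[3:5, 4:8] = 20.0    # a "car" patch
frame2 = background.copy(); frame2[3:5, 8:12] = 20.0   # same car, 4 pixels on

c1 = centroid(xyz, moving_mask(frame1, background))
c2 = centroid(xyz, moving_mask(frame2, background))
print(speed_mps(c1, c2))  # 4 units of displacement at 10 Hz -> 40.0
```

The same centroid-and-difference logic applies unchanged once the synthetic arrays are replaced with real projected frames; most of the engineering effort goes into the background model and the frame-to-frame association, not this arithmetic.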

This is a high-level overview of the available methods. However, none of them is perfect, and it would take a good amount of work to address all the edge cases and reach production quality. To note a few:

  • The point cloud for the same object (vehicle) can vary from frame to frame. If you estimate the object center using naive methods, the estimated center will likely drift from the true center, which can greatly affect the accuracy of the velocity estimate (use a Kalman filter to reduce the noise).
  • Partial object detection: walls obstructing the view, or other vehicles passing by, can also degrade the output of the object detector or cause objects to switch IDs. In this situation, using more than one lidar (positioned apart from each other) and fusing the point clouds before passing them to the object detection or clustering step can help mitigate the problem.
  • Dealing with irregular objects: for example, towed cars or trailers, …
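On the Kalman filter point above, here is a minimal 1-D constant-velocity filter for smoothing a noisy centroid track; the velocity component of the state doubles as the speed estimate. The noise magnitudes are assumptions you would tune, and a real tracker would filter the full 2-D or 3-D centroid:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal 1-D constant-velocity Kalman filter over a noisy position
    track. State is [position, velocity]; only position is measured."""

    def __init__(self, dt, process_var=0.1, meas_var=0.25):
        self.x = np.zeros(2)                        # [position, velocity]
        self.P = np.eye(2) * 100.0                  # large initial uncertainty
        self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity model
        self.H = np.array([[1.0, 0.0]])             # measure position only
        self.Q = np.eye(2) * process_var            # simplified process noise
        self.R = np.array([[meas_var]])             # measurement noise

    def step(self, z):
        # predict
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # update with the measured centroid position z
        y = z - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ self.H) @ self.P
        return self.x  # smoothed [position, velocity]

# toy track: a vehicle moving at 15 m/s, observed at 10 Hz with noisy centroids
rng = np.random.default_rng(0)
kf = ConstantVelocityKF(dt=0.1)
for k in range(50):
    true_pos = 15.0 * 0.1 * k
    pos, vel = kf.step(np.array([true_pos + rng.normal(0.0, 0.5)]))
print(f"estimated speed: {vel:.1f} m/s")  # close to the 15 m/s ground truth
```

Differencing raw centroids frame-to-frame amplifies the centroid jitter; feeding them through a filter like this instead gives a much steadier speed reading.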

Hope this helps!

To emphasize @Samahu’s point about the size of this problem: we have teams dedicated to it.
This is our product for traffic management and analysis: Ouster BlueCity: AI-Driven Lidar for Smarter Cities & Traffic Safety | Ouster

I recommend reaching out to sales@ouster.io to see if it fits your needs.