I’ve been working on a project to build a Laser Range Finder using a Raspberry Pi, an Arduino and OpenCV with a webcam. I hope that eventually this project may be used on a mobile robot running the algorithms taught in Udacity CS373, especially SLAM (Simultaneous Localization and Mapping).
The First Prototype
This first prototype is more a proof of concept than a usable device. Still, it works pretty well, apart from being quite slow.
- Raspberry Pi Model B running:
- Arch Linux ARM with a modified kernel to support the Arduino and the webcam
- OpenCV 2.4.1
- Python 2.7
- The LRF software
- Arduino UNO connected via USB to the Raspberry Pi. It runs a controller that receives messages to turn the laser on and off (see the serial sketch after this list). I hope it will also control some servos later.
- A Logitech C270 webcam, disassembled so it can be installed in the casing
- Sparkfun TTL Controlled Laser Module
- A Targus mini USB hub
- My powered USB cable, which supplies the extra current that the Raspberry Pi can’t provide to the USB devices
- A couple of USB power sources, one for the RPi and the other for the USB devices
- A lousy acrylic casing, the first thing I’ve done with acrylic
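For reference, this is roughly how the Raspberry Pi side of that laser toggle can look. It’s only a minimal sketch, assuming a pyserial connection and a made-up one-byte protocol (‘1’ for on, ‘0’ for off); the actual port name and messages used by the LRF software may differ.

```python
# Minimal sketch: toggle the laser through the Arduino over USB serial.
# The port name and the '1'/'0' command bytes are assumptions for illustration.
import time
import serial  # pyserial

ARDUINO_PORT = '/dev/ttyACM0'  # typical device node for an Arduino UNO on Linux
BAUD_RATE = 9600

def set_laser(ser, on):
    """Send a one-byte command to the Arduino laser controller."""
    ser.write('1' if on else '0')

if __name__ == '__main__':
    ser = serial.Serial(ARDUINO_PORT, BAUD_RATE, timeout=1)
    time.sleep(2)          # the UNO resets when the serial port is opened
    set_laser(ser, True)   # laser on while a frame is captured
    time.sleep(1)
    set_laser(ser, False)  # laser off again
    ser.close()
```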
Also check the video for an overview of its parts and how it works.
This prototype is very slow (one measurement takes about 10 seconds), but I’m optimistic that it may become more functional within a couple of iterations, especially with the upcoming Raspberry Pi Foundation CSI camera. The device is pretty accurate and precise at short distances but, as expected, both degrade at larger distances. I would estimate that up to about 35 cm it’s very accurate, from 35 cm to about 60 cm it’s pretty good, and up to 2 m it may be good enough for a small robot. Later I’ll post more details on the measured precision and accuracy and some tricks to enhance them.
As you can see in the video, it has a simple web interface to trigger the measurement process. It can also be done from the command line by SSHing into the Raspberry Pi. I’ll also post how OpenCV detects the laser in the image, and the next steps I’ll take to improve it. For now you can get most of the working code from Github. The details of the mathematical model appear below.
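Until that post, here is the core idea of the detection: with the laser on, the dot is usually the brightest spot in the red channel of the frame. This is just a minimal sketch of that kind of detection using the OpenCV 2.4 Python bindings; the blur size and brightness cutoff are illustrative guesses, not necessarily what the LRF code does.

```python
# Minimal sketch: locate the laser dot as the brightest spot in the red
# channel. Blurring first makes minMaxLoc robust against single hot pixels.
import cv2

def find_laser(frame):
    """Return the (x, y) pixel of the strongest red response, or None."""
    red = frame[:, :, 2]                      # OpenCV frames are BGR
    red = cv2.GaussianBlur(red, (5, 5), 0)    # suppress sensor noise
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(red)
    if max_val < 200:                         # arbitrary "bright enough" cutoff
        return None
    return max_loc

cap = cv2.VideoCapture(0)    # the Logitech C270
ok, frame = cap.read()
if ok:
    print find_laser(frame)  # e.g. (431, 240); the x coordinate feeds the model
cap.release()
```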
All comments are welcome here (comments section at the bottom) or via Twitter.
The Model
This diagram shows the basic idea for the project. The laser is shot at a target at a known angle and the image is captured by the webcam. The angle at which the laser appears in the image corresponds to the incidence angle of the laser on the target and, thus, to the distance to the target.
If the target is a little farther, so that the laser crosses the focus line of the camera, the model is a bit different:
Here we assume that both the camera-to-laser angle (β) and the distance from the camera to the laser (L) are fixed. We also know the focal distance (f) and the horizontal resolution (CAMERA_WIDTH), which are parameters of the camera. With OpenCV we can process the image and calculate the horizontal distance (vc) from the camera’s Y axis to the point where the laser appears in the image. Given those values we can use simple trigonometry to calculate the angle at which the laser appears in the image (δ) and the distance from the camera to the target (Dc). Note that we are looking for Dc and not D, which is the perpendicular distance from the camera to the target. By the way, for the purposes of this model the webcam is treated as a pinhole camera; later on we will apply corrections for the physical camera so that it better fits the model.
vx = CAMERA_WIDTH - vc
δ = atan( f / vx )
λ = π - β - δ
Dc = L * sin( β ) / sin( λ )
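Translated directly into Python, the model looks like this. The constant values are placeholders, not my calibrated rig parameters; note that f must be expressed in pixels so that f / vx is dimensionless.

```python
# Direct translation of the model above. All constants are placeholder values.
from math import atan, pi, sin

CAMERA_WIDTH = 640    # horizontal resolution of the frame, in pixels
F_PIXELS = 700.0      # focal distance f in pixels (placeholder)
BETA = 1.35           # camera-to-laser angle beta in radians (placeholder)
L_CM = 15.0           # camera-to-laser distance L in cm (placeholder)

def distance_to_target(vc):
    """Distance Dc (same units as L) from the laser pixel position vc."""
    vx = CAMERA_WIDTH - vc
    delta = atan(F_PIXELS / vx)         # delta: angle of the laser in the image
    lam = pi - BETA - delta             # lambda: remaining angle of the triangle
    return L_CM * sin(BETA) / sin(lam)  # law of sines: Dc/sin(beta) = L/sin(lambda)
```

Feeding this function the x coordinate found by the detection step yields the range in the same units as L.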
I’ll post more details on the implementation later.