Autonomous Robot with Stereo Vision

Another old project, this one related to my PhD thesis. It was the first attempt to test the entire system in a real environment. An IEEE paper about it was published in 2009.

My PhD thesis had 3 main sections:

  1. Developing the IMU hardware and firmware
  2. Developing a stereo vision system to detect and avoid obstacles
  3. Conclusions

The first section was especially annoying because back in 2006 there were no triaxial MEMS gyroscopes. The only parts I could get were single-axis (Z-axis) gyroscopes from ADI, so I had to design an unusual PCB just to get all three axes in place.

The main problem I had to deal with was the gyroscope's "angular random walk" and offset instability, which are inherent to the technology and get worse with temperature variations. To improve the situation, an algorithm was developed to periodically recalibrate the gyroscope offset; this was possible by using data from the rest of the sensors: the accelerometer and the electronic compass.
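A rough sketch of the idea is below. This is only an illustration in Python (the real implementation ran in the MCU firmware); the function name, thresholds, and sample layout are my own assumptions:

```python
import numpy as np

def update_gyro_bias(gyro_samples, accel_samples, heading_samples,
                     bias, g=9.81, accel_tol=0.05, heading_tol=0.5):
    """Re-estimate the gyro bias (offset) while the robot appears to be at rest.

    gyro_samples    : (N, 3) raw angular-rate readings [deg/s]
    accel_samples   : (N, 3) accelerometer readings [m/s^2]
    heading_samples : (N,)   compass headings [deg]
    bias            : (3,)   current bias estimate, returned unchanged
                             if the robot does not appear stationary.
    """
    accel_norm = np.linalg.norm(accel_samples, axis=1)
    # no linear acceleration: magnitude stays close to gravity
    still = (np.abs(accel_norm - g) < accel_tol * g).all()
    # heading not changing (ignoring the 0/360 degree wrap for simplicity)
    steady = np.ptp(heading_samples) < heading_tol
    if still and steady:
        # the true angular rate is zero, so any remaining mean reading is offset
        bias = gyro_samples.mean(axis=0)
    return bias
```

The corrected angular rate is then simply the raw gyro reading minus the current bias estimate, taken before integration.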

Fig. 1: The IMU, ~2007

All angular computations were done using Euler angles. Another feature of the IMU was the determination of the correct magnetic north even when the robot was not perfectly horizontal. For this, the MCU had to compute the projection of the Earth's magnetic field vector onto a virtual, perfectly horizontal plane.
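The projection boils down to de-rotating the measured magnetic vector by the roll and pitch estimated from the accelerometer. A minimal sketch, assuming a common x-forward, y-right, z-down axis convention (the signs depend on how the sensors are actually mounted):

```python
import numpy as np

def tilt_compensated_heading(accel, mag):
    """Project the magnetic field onto the horizontal plane and return heading [deg].

    accel : (ax, ay, az) accelerometer reading (robot static or slowly moving)
    mag   : (bx, by, bz) magnetometer reading, same body frame
    """
    ax, ay, az = accel
    bx, by, bz = mag

    # roll and pitch from the gravity vector
    roll = np.arctan2(ay, az)
    pitch = np.arctan(-ax / (ay * np.sin(roll) + az * np.cos(roll)))

    # de-rotate the magnetic vector by roll and pitch: horizontal components
    bxh = (bx * np.cos(pitch)
           + by * np.sin(pitch) * np.sin(roll)
           + bz * np.sin(pitch) * np.cos(roll))
    byh = by * np.cos(roll) - bz * np.sin(roll)

    return np.degrees(np.arctan2(-byh, bxh)) % 360.0
```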

Fig. 2: Determination of the magnetic north pole

The stereo vision part was by far the most interesting and time consuming. One challenge I faced was acquiring images from two webcams under Windows XP. Some solutions involving a special ActiveX control installed in "safe mode" worked, but not very reliably. The final solution came from a product used in surveillance: a USB-to-IP converter able to acquire images from multiple webcams:

Fig. 3: Stereo vision hardware

I took advantage of the LabVIEW IMAQ 6.1 functions to process the images. Back then, around 2009, there were no functions related to stereo images, so everything for this part was created from scratch.

Fig. 4: LabVIEW GUI for stereo image processing

The steps that need to be followed in digital image processing are listed below (a minimal sketch of such a pipeline follows the list):

  • Acquisition is the process of transforming the optical image into a digital image in the computer memory;
  • Preprocessing is the process of improving image parameters: calibration, noise reduction, contrast adjustment, detail enhancement, etc.;
  • Segmentation is the process of breaking the image into smaller regions;
  • Description is the process of computing different parameters used for object identification;
  • Recognition is the process of identifying the objects;
  • Interpretation is the process of giving meaning to the entire picture.
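A minimal Python skeleton of such a pipeline; the real processing used LabVIEW IMAQ, the stage implementations here are simplistic stand-ins, and the acquisition step is replaced by a synthetic frame:

```python
import numpy as np

def preprocess(img):
    """Preprocessing: stretch contrast to the full 0..255 range."""
    lo, hi = float(img.min()), float(img.max())
    return ((img - lo) / max(hi - lo, 1.0) * 255.0).astype(np.uint8)

def segment(img, threshold=128):
    """Segmentation: split the image into object / background pixels."""
    return img > threshold

def describe(mask):
    """Description: compute simple parameters used to identify objects."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return {"area": 0, "bbox": None}
    return {"area": int(mask.sum()),
            "bbox": (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))}

# Acquisition would fill `frame` from a camera; a random image stands in here.
frame = (np.random.rand(120, 160) * 255).astype(np.uint8)
features = describe(segment(preprocess(frame)))
print(features)
```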

Since the two webcams were loosely mounted next to each other, the axes of the two lenses were not parallel. To fix this, an automatic calibration algorithm was created to compute the rotation and translation of one image relative to the other and compensate for the misalignment.
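The thesis implementation was built in LabVIEW; the sketch below only illustrates the underlying least-squares idea, assuming a set of matched point pairs between the two images is already available (the point-matching step is not shown):

```python
import numpy as np

def fit_rotation_translation(pts_left, pts_right):
    """Least-squares 2D rotation + translation mapping pts_left onto pts_right.

    pts_left, pts_right : (N, 2) arrays of matched point coordinates
    Returns (R, t) with R a 2x2 rotation matrix and t a 2-vector such that
    pts_right ~= pts_left @ R.T + t.
    """
    cl, cr = pts_left.mean(axis=0), pts_right.mean(axis=0)
    H = (pts_left - cl).T @ (pts_right - cr)      # cross-covariance of centered points
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                      # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cr - R @ cl
    return R, t
```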

Fig. 5: Image auto-calibration program

An image holds a huge amount of information, but a large portion of it is redundant, so it is important to correctly determine the areas of interest that are relevant for obstacle detection. I focused on vertical edges, not all edges, because the webcams were mounted horizontally.
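In practice this means keeping only pixels with a strong horizontal intensity gradient. A minimal equivalent of the idea in Python (the original used IMAQ edge-detection functions; the gradient operator and threshold here are arbitrary choices):

```python
import numpy as np

def vertical_edges(img, threshold=40):
    """Mark vertical edges: pixels with a strong horizontal intensity gradient.

    img : 2-D grayscale array
    Returns a boolean mask the same size as img.
    """
    img = img.astype(float)
    # central difference along the column axis ~ horizontal gradient
    gx = np.zeros_like(img)
    gx[:, 1:-1] = img[:, 2:] - img[:, :-2]
    return np.abs(gx) > threshold
```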

Fig. 6: Vertical edge detection

The final result was displayed on a 2D map where the obstacle positions were indicated; the color of each marker gave an indication of the obstacle's height.
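The standard stereo geometry behind this (depth Z = f·B/d from the disparity d, focal length f, and baseline B) can be sketched as follows; the grid size, cell size, and parameter names are assumptions for illustration only:

```python
import numpy as np

def obstacle_map(matches, f_px, baseline_m, cx, cy,
                 map_size=(50, 50), cell_m=0.1):
    """Place matched edge points on a top-down 2D grid.

    matches    : (N, 3) array of (u, v, disparity) for points matched
                 between the left and right image, in pixels
    f_px       : focal length in pixels
    baseline_m : distance between the two cameras in metres
    cx, cy     : principal point of the reference camera in pixels
    Returns a (rows, cols) grid holding the maximum obstacle height per cell.
    """
    grid = np.zeros(map_size)
    for u, v, d in matches:
        if d <= 0:
            continue
        z = f_px * baseline_m / d        # distance in front of the robot
        x = (u - cx) * z / f_px          # lateral offset
        h = (cy - v) * z / f_px          # height above the optical axis
        row = int(z / cell_m)
        col = int(x / cell_m) + map_size[1] // 2
        if 0 <= row < map_size[0] and 0 <= col < map_size[1]:
            grid[row, col] = max(grid[row, col], h)
    return grid
```

The grid can then be rendered with a color scale over the stored heights, which is essentially what the 2D map display did.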

Fig. 7: Interpretation

Now, in 2019, all this looks childish, but most of this work was done between 2007 and 2009, when very little information about navigation based on stereo vision was available and OpenCV was not yet the mature, widely used option it is today.
