DNRacing v0.5 — OpenCV and Data Collection for Imitation Learning

Written by hackernoon-archives | Published 2017/10/27
Tech Story Tags: opencv | photography | data-collection | imitation-learning | dnracing-v05


Project update on fisheye lens distortion and data collection for behavioral cloning.

(Top left: 4 mm focal length lens. Center: raw fisheye lens image. Top right: undistorted fisheye lens image.)

For data collection I used 1.8 mm focal length fisheye lenses and yellow banquet table spreads to mark the track. I chose fisheye lenses, despite having to correct them for distortion, because they give a 185 degree field of view. The top left image comes from a 4.0 mm focal length wide-angle lens: the field of view is quite narrow and doesn't capture all the lane cues for an abrupt turn such as a hairpin. The center image shows that a fisheye lens clearly captures the lanes even through a 90 degree turn. The image was then processed to remove the distortion and strip out as much color apart from the track as possible. Unfortunately, the cameras I received have a slightly yellow hue, which made the yellow table spreads a big mistake: even in the HSV color space I cannot extract just the lane colors.
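
For reference, below is a minimal sketch of the kind of OpenCV pipeline involved, assuming the fisheye calibration matrices K and D have already been estimated (e.g. with cv2.fisheye.calibrate on checkerboard images). The calibration values, file names, and HSV thresholds are placeholders, not the ones I actually use.

```python
import cv2
import numpy as np

# Placeholder calibration results from cv2.fisheye.calibrate();
# the real values depend on the 1.8 mm lens and camera module.
K = np.array([[220.0, 0.0, 320.0],
              [0.0, 220.0, 240.0],
              [0.0,   0.0,   1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])   # fisheye distortion coefficients k1..k4

img = cv2.imread("track_frame.jpg")              # placeholder file name
h, w = img.shape[:2]

# Undistort the fisheye frame while keeping most of the wide field of view.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=0.8)
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)

# Try to isolate the yellow lane markings in HSV space.
# With the yellowish hue of these cameras the thresholds also pick up
# background, which is exactly the problem described above.
hsv = cv2.cvtColor(undistorted, cv2.COLOR_BGR2HSV)
lane_mask = cv2.inRange(hsv, (20, 80, 80), (35, 255, 255))
lanes_only = cv2.bitwise_and(undistorted, undistorted, mask=lane_mask)

cv2.imwrite("lanes_only.jpg", lanes_only)
```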

I tested the AI algorithm for the vehicles with the new fisheye/OpenCV-processed images at 144 x 144 pixel resolution, but I still do not get convergence. I came to the conclusion that I need to write my own neural network code instead of hijacking someone else's code to fit my needs. I also decided to ditch the idea of manually training the vehicles with a controller; it is too labor intensive. Finally, I decided to sell the bulky ZOTAC nettops in favor of a Jetson TX2. The reasons I decided to switch back to the Jetson developer board are listed below.

  1. GPU memory: The TX2 shares memory between the GPU and CPU, so assuming I use 2 GB for the CPU, I still have 6 GB left for the GPU. That's twice the memory I can use compared to the ZOTAC PCs, which means bigger input images for the neural networks. I could also get into the bandwidth losses across the ZOTAC PC's PCIe lanes to its GPU, but I won't go there.
  2. Power consumption: The ZOTAC PCs draw 120 W at peak compared to the TX2's 15 W. I could even sell my 300 USD battery packs at this point and use laptop power banks instead.
  3. Size: Although the TX2 developer board is similar in size to the ZOTACs, I could get a third-party carrier board and shrink the footprint to about a sixth of its current size. Considering the PC is the biggest component on the vehicles right now, I could use the downsizing.
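
Since I plan to write the network code myself, here is a rough sketch of the sort of behavioral-cloning network I have in mind for 144 x 144 frames, regressing a single steering value. The framework (Keras) and the layer sizes are only illustrative assumptions, not a final design.

```python
# Rough sketch of a behavioral-cloning network for 144 x 144 camera frames.
# Keras is used only for illustration; layer sizes are guesses.
from tensorflow.keras import layers, models

def build_model():
    model = models.Sequential([
        layers.Input(shape=(144, 144, 3)),
        layers.Lambda(lambda x: x / 255.0 - 0.5),        # normalize pixel values
        layers.Conv2D(24, 5, strides=2, activation="relu"),
        layers.Conv2D(36, 5, strides=2, activation="relu"),
        layers.Conv2D(48, 5, strides=2, activation="relu"),
        layers.Conv2D(64, 3, activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(50, activation="relu"),
        layers.Dense(1)                                   # steering angle regression
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

model = build_model()
model.summary()
```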

I asked a couple of friends to help me drive the two vehicles to collect overtaking data for the neural net. As you can see in the video, this consisted of one vehicle traveling slightly slower while the other tried to overtake it. Driving the vehicles properly for this kind of data collection turned out to be difficult, and I realized I need to implement some sort of racing simulator in Gazebo/Unity that can collect training data virtually. Besides, it was a pain having to recharge the batteries every two hours. Below is my professor (on my right) trying to drive the car.

(Left: my MSc supervisor, Dr. Longo)

My plan, therefore, is to use my ZED stereo camera to capture a spatial map of the environment, which I will export to ROS as a virtual simulator. I plan to use my Occipital IR depth sensors to scan both vehicles into .STL models. I still have to decide how far I want to go in modeling the vehicle dynamics for the simulator; I will probably attempt a steady-state 7-DOF model combined with a simplified Pacejka Magic Formula tire model. I am also thinking of adding optical wheel encoders to measure wheel speed for braking and throttle positions.
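
For reference, the simplified Pacejka Magic Formula I have in mind for the lateral tire force looks roughly like the sketch below. The B, C, D, E coefficients are placeholder values, not parameters fitted to these RC tires.

```python
import math

def pacejka_lateral_force(slip_angle_rad, Fz,
                          B=10.0, C=1.9, D_coeff=1.0, E=0.97):
    """Simplified Pacejka Magic Formula for lateral tire force.

    slip_angle_rad : tire slip angle in radians
    Fz             : vertical load on the tire in newtons
    B, C, D_coeff, E : stiffness, shape, peak and curvature factors
                       (placeholder values, not fitted to the RC tires)
    """
    peak = D_coeff * Fz   # peak lateral force scales with vertical load
    x = slip_angle_rad
    return peak * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

# Example: ~5 degrees of slip on a tire carrying 6 N (a small RC-car corner load)
Fy = pacejka_lateral_force(math.radians(5.0), 6.0)
print(f"Lateral force: {Fy:.2f} N")
```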

As a side note, I came across a paper that used sensor fusion of anemometers and IMUs for dead reckoning. I had a similar idea in mind after being inspired by the pitot tubes used in F1. It's an interesting concept, but I will stick with IMUs and wheel encoders for now.
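
To make the comparison concrete, a plain wheel-encoder plus IMU dead-reckoning update would look something like the toy sketch below; the variable names and time step are assumptions.

```python
import math

def dead_reckoning_step(x, y, heading, wheel_speed, yaw_rate, dt):
    """One dead-reckoning update from wheel-encoder speed and IMU yaw rate.

    x, y        : current position estimate (m)
    heading     : current heading estimate (rad)
    wheel_speed : speed from the optical wheel encoders (m/s)
    yaw_rate    : yaw rate from the IMU gyro (rad/s)
    dt          : time step (s)
    """
    heading += yaw_rate * dt
    x += wheel_speed * math.cos(heading) * dt
    y += wheel_speed * math.sin(heading) * dt
    return x, y, heading

# Example: driving straight at 2 m/s for 10 steps of 10 ms each
x = y = heading = 0.0
for _ in range(10):
    x, y, heading = dead_reckoning_step(x, y, heading, 2.0, 0.0, 0.01)
print(x, y, heading)   # ~0.2 m traveled along x
```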

Below is another video of one of the friends who helped drive the vehicles, back when the track was still twisty. He did his MSc thesis on racing line optimization for go-kart circuits and is now an engineer on Scuderia Toro Rosso's F1 team.

Video: https://medium.com/media/3503d56f827cede3bf6db9f3576a9cb1/href

