At this event I presented a few experiments with machine learning.
The example application was classifying traffic signs. The data set comes from the “German Traffic Sign Recognition Benchmark” (GTSRB): it contains 51,839 real-world images of 43 different traffic sign classes (39,209 for training and 12,630 for the benchmark test set).
I used GNU Octave to preprocess the images: resize them to a uniform size, stretch the dynamic range, convert them to greyscale, and store everything in a single .mat file.
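The dynamic-range stretch is plain min-max normalization: the darkest pixel is mapped to 0 and the brightest to 255. The actual preprocessing was done in GNU Octave; as an illustration only (not the original code), the same operation can be sketched in C++:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Stretch the dynamic range of a greyscale image so that the darkest
// pixel becomes 0 and the brightest becomes 255 (min-max normalization).
std::vector<std::uint8_t> stretch_range(const std::vector<std::uint8_t>& img) {
    auto [lo_it, hi_it] = std::minmax_element(img.begin(), img.end());
    int lo = *lo_it, hi = *hi_it;
    if (hi == lo) return img;  // flat image: nothing to stretch
    std::vector<std::uint8_t> out(img.size());
    for (std::size_t i = 0; i < img.size(); ++i)
        out[i] = static_cast<std::uint8_t>((img[i] - lo) * 255 / (hi - lo));
    return out;
}
```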
In the first experiment I used a simple fully-connected network with a single hidden layer of 1,000 nodes. I implemented the net in GNU Octave using the math from Tariq Rashid’s “Make Your Own Neural Network”. With this net I reached a correct detection rate of ~85% on the benchmark.
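The forward pass of such a net is just two matrix-vector products with a sigmoid after each, as in Rashid’s book. A minimal C++ sketch of that computation (layer sizes here are illustrative, not the 1,000-node net from the experiment):

```cpp
#include <cmath>
#include <vector>

// output = sigmoid(W_ho * sigmoid(W_ih * input)), weights stored row-major.
static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// Multiply a (rows x in.size()) row-major matrix by a vector, apply sigmoid.
std::vector<double> layer(const std::vector<double>& w, std::size_t rows,
                          const std::vector<double>& in) {
    std::vector<double> out(rows);
    for (std::size_t r = 0; r < rows; ++r) {
        double sum = 0.0;
        for (std::size_t c = 0; c < in.size(); ++c)
            sum += w[r * in.size() + c] * in[c];
        out[r] = sigmoid(sum);
    }
    return out;
}

// Forward pass: input layer -> hidden layer -> output layer.
std::vector<double> forward(const std::vector<double>& w_ih, std::size_t hidden,
                            const std::vector<double>& w_ho, std::size_t outputs,
                            const std::vector<double>& input) {
    return layer(w_ho, outputs, layer(w_ih, hidden, input));
}
```

Training then follows the book’s backpropagation rules, adjusting both weight matrices from the output error.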
In the next experiment I used dlib, a machine learning toolkit for C++. The same network architecture implemented with dlib reached a correct detection rate of ~89.5%, thanks to dlib’s more sophisticated training algorithm.
In the last experiment I kept using dlib, but changed the network architecture to a “deep” network: starting from a LeNet architecture, I added a few more layers, mixing fully-connected, convolutional and max-pooling layers. To train this network I used a VM with an NVIDIA Tesla V100, which dlib can drive through CUDA/cuDNN. This raised the correct detection rate on the benchmark to ~95%. Not bad for just a few lines of C++ code and 30 seconds of training!
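In dlib, such a network is declared as a single C++ type built from nested layer templates. The following is a LeNet-style sketch along the lines of dlib’s own DNN introduction example, not the exact network from the talk (filter counts and layer sizes are illustrative; only the 43 output classes come from GTSRB):

```cpp
#include <dlib/dnn.h>

using namespace dlib;

// LeNet-style architecture: two conv + max-pool stages, then three
// fully-connected layers, ending in a 43-way softmax classifier.
using net_type = loss_multiclass_log<
                 fc<43,
                 relu<fc<84,
                 relu<fc<120,
                 max_pool<2,2,2,2, relu<con<16,5,5,1,1,
                 max_pool<2,2,2,2, relu<con<6,5,5,1,1,
                 input<matrix<unsigned char>>
                 >>>>>>>>>>>>;

int main() {
    // training_images / training_labels would be loaded from the
    // preprocessed GTSRB data (hypothetical variables, not shown here).
    std::vector<matrix<unsigned char>> training_images;
    std::vector<unsigned long> training_labels;

    net_type net;
    dnn_trainer<net_type> trainer(net);  // uses CUDA/cuDNN when available
    trainer.set_learning_rate(0.01);
    trainer.set_mini_batch_size(128);
    trainer.train(training_images, training_labels);
}
```

Adding or removing a layer is just a change to the `net_type` alias, which is what made experimenting with deeper variants so cheap.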
To put this into perspective: when the first competition was held in 2011, a score of 95% would have placed roughly mid-field among the participants, with the top entry reaching 98.98% and beating human performance at 98.81%.
Download the source code: 2020-12-03_noi-dev-thu.zip