Simple Models for Deep Learning Testing

Watch on Github

I’ve been learning a bit about deep neural networks on Udacity’s Deep Learning course (which is mostly a TensorFlow tutorial). The course uses notMNIST as a basic example to show how to train simple deep neural networks, and lets you achieve an impressive 96% accuracy with relative ease.

On my way to the top of Mount Stupid, I wanted to get a better sense of what the NN is doing, so I took the examples from the first two chapters and modified them to test an idea: create some simple models, generate training and test data sets from them, and see how the deep network performs and how different parameters affect the result.

The models I’m using are two-dimensional (x, y) real values in [-1, 1] that can be classified into two classes: 0 or 1, inside or outside, true or false. The classification functions make the samples increasingly hard to classify:

  • positivos: A trivial classification – if x>0 class=1 else class=0
  • linear: if y>x class=1 else class=0
  • circle: if (x,y) is inside a circle of radius=r then class=1 else class=0
  • ring: if (x,y) is inside a circle of radius=r but outside a concentric circle of radius=r/2 then class=1 else class=0
  • cos: if (x,y) is above a cosine with frequency=r then class=1 else class=0
  • polar: if (x,y) is below a cosine of frequency r in polar coordinates (inside a “rose of n-petals”) then class = 1 else class = 0

The plots shown above are generated from the training sets, with positive samples plotted in dark blue and negative samples in light blue.
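As a reference for how such a training set could be built, here is a minimal NumPy sketch (illustrative only, not the notebook’s actual code; the exact cosine and rose formulas and the default radius r=0.7 are my own guesses):

import numpy as np

def classify(model, x, y, r=0.7):
    # Label a point (x, y) in [-1, 1]^2 according to one of the toy models.
    if model == "positivos":              # trivial: positive half-plane
        return 1 if x > 0 else 0
    if model == "linear":                 # above the diagonal y = x
        return 1 if y > x else 0
    if model == "circle":                 # inside a circle of radius r
        return 1 if x * x + y * y < r * r else 0
    if model == "ring":                   # inside radius r but outside radius r/2
        d2 = x * x + y * y
        return 1 if (r / 2) ** 2 < d2 < r ** 2 else 0
    if model == "cos":                    # above a cosine of frequency r
        return 1 if y > np.cos(r * np.pi * x) else 0
    if model == "polar":                  # inside a "rose of n petals"
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        return 1 if rho < abs(np.cos(r * theta)) else 0
    raise ValueError("unknown model: %s" % model)

def make_dataset(model, n=10000, seed=0):
    # Uniform random points in [-1, 1]^2 with their labels.
    rng = np.random.RandomState(seed)
    X = rng.uniform(-1, 1, size=(n, 2)).astype(np.float32)
    labels = np.array([classify(model, x, y) for x, y in X], dtype=np.int32)
    return X, labels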

Two neural networks are defined in the code, one with a single hidden layer and one with two hidden layers. Several parameters may be adjusted for each run:

  • model function to use
  • number of nodes in each layer,
  • batch size and number of steps for the stochastic gradient descent of the training phase,
  • beta for regularisation,
  • standard deviation for the initialisation of the variables,
  • and learning rate decay for the second NN

Other parameters, such as the sizes of the training, validation and test sets, are treated as globals. When the NN function is called, the network is trained and tested, and the predictions are added to the plot, marked in gray if the prediction was correct and in red if it was incorrect, in order to see where the classifier is missing.
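As a rough sketch of what the single-hidden-layer version looks like (illustrative only, not the notebook’s actual code; it is written against the TensorFlow 1.x graph API, and the arguments simply mirror the parameter list above):

import numpy as np
import tensorflow as tf  # TensorFlow 1.x-style graph API

def train_nn(train_X, train_y, num_nodes=100, batch_size=100,
             num_steps=1000, beta=1e-3, stddev=0.1, learning_rate=0.5):
    graph = tf.Graph()
    with graph.as_default():
        x = tf.placeholder(tf.float32, shape=(None, 2))
        y = tf.placeholder(tf.int32, shape=(None,))

        # Hidden layer and 2-class output layer, initialised with a configurable stddev.
        w1 = tf.Variable(tf.truncated_normal([2, num_nodes], stddev=stddev))
        b1 = tf.Variable(tf.zeros([num_nodes]))
        w2 = tf.Variable(tf.truncated_normal([num_nodes, 2], stddev=stddev))
        b2 = tf.Variable(tf.zeros([2]))

        hidden = tf.nn.relu(tf.matmul(x, w1) + b1)
        logits = tf.matmul(hidden, w2) + b2

        # Cross-entropy loss plus L2 regularisation controlled by beta.
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=logits))
        loss += beta * (tf.nn.l2_loss(w1) + tf.nn.l2_loss(w2))
        optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
        predictions = tf.argmax(logits, 1)

    with tf.Session(graph=graph) as session:
        session.run(tf.global_variables_initializer())
        for step in range(num_steps):
            # Stochastic gradient descent on random mini-batches.
            idx = np.random.randint(0, len(train_X), batch_size)
            session.run(optimizer, feed_dict={x: train_X[idx], y: train_y[idx]})
        return session.run(predictions, feed_dict={x: train_X})

The second network described above adds another hidden layer and exponential learning rate decay on top of this same structure.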

You can call the NN function with varying parameters to test what happens. Some iterative code is also included to test many variants and generate a *lot* of plots. Check the code and comment on what you find.

Here are some interesting results I’ve got:

Trivial model with a single hidden node and few training data

As expected, the model is too simple to really learn anything: it’s basically classifying everything as negative and thus gets an accuracy of 49.9%.


 

Trivial model with a single hidden node with more training data

By increasing the size of the batches and the number of steps for the stochastic gradient descent, even a single node in the hidden layer yields good results: 99.6% accuracy.


 

Linear model with a single hidden node and few training data

Of course, little training data doesn’t do any good for a slightly more complex model: 50.1% accuracy.


 

Linear model with a single hidden node with more training data

Again the classifier performs way better: 99.9%


 

Linear model with 100 hidden nodes and few training data

Can hidden nodes compensate for few training samples? 100 nodes, a batch of 100 and 10 steps certainly do better at 93.9% accuracy, with an interesting plot showing a linear classification with a wrong (but close) slope.


 

Linear model with 1000 hidden nodes and few training data

What about 1000 hidden nodes and the same parameters as before? It certainly gets you closer, but still not quite there: 98% accuracy. Looking at the slope, one might think it is overfitting.


 

Linear model with lots of hidden nodes and few training data

I first tried 10,000 hidden nodes (got 98.9%) and then 100,000 nodes (shown below), and accuracy dropped back to 98.5%, clearly overfitting. For this case, a wider network behaves better, but only up to a certain point.


 

Circle with a previously good classifier

Training the classifier that previously behaved well on the circular model clearly shows that it needs more data or nodes.


 

Circle with a single node

It turns out that a single node can’t capture the complexity of the model. Varying the training parameters, it looks like the NN is always trying to fit a linear classifier.


 

Circle with many nodes and different training sizes

Two nodes

With two nodes the model looks like it is fitting two linear classifiers. Even when varying the training parameters the results are similar, except in some cases with more training data that look like the last examples of the single node (maybe overfitting?).

Three and more nodes

Using three nodes, things become interesting. For a batch size of 100 and 1000 steps the NN gives a pretty good approximation to the circle and goes beyond fitting three linear classifiers. Something similar happens when varying the training parameters.


Increasing the number of nodes increases the accuracy and the visual fit to the circle. Check 10 nodes (97.8%) and 100 (99%); it seems to plateau at some point, since 1000 nodes also gives 99%.
 

Finally, increasing the training parameters on the best classifier gives us a nice 99.5% accuracy, but I’m not sure whether it’s overfitting.


 

The Ring

For the ring I started by testing one of the best performers on the circle: 100 nodes, a batch size of 100 and 1000 steps. Somewhat expectedly, it tried to fit a single circle and missed the inner one.


Increasing the training parameters gave an interesting and unexpected result:


I also found that running many times with the same parameters may yield different results. Both cases could be explained by the random initialisation and the stochastic gradient descent, as the last example looks like a local minimum. Check below another interesting result using the exact same parameters, yielding a pretty accurate classifier (92.7%).


Increasing the number of nodes and the training parameters improves accuracy and makes it more likely to get a classifier with a pretty good level of accuracy (96.9%). Shown below: 1000 nodes, a batch size of 1000 and 10,000 steps.

Cosine

With few nodes and different training parameters we see the classifier struggling to fit.

Increasing the parameters gives a model that doesn’t improve the accuracy very much (around 87%). From the ring and the cosine it looks like there are limits to the complexity a single hidden layer can handle.

Cosine in polar coordinates

This example came up as a way to try a harder problem for a classifier. As expected, the most basic versions of the parameters yield bad results. Just keep in mind that a trivial classifier labelling everything as negative would already have about 80% accuracy.


We can see that a NN with 10 nodes tries to fit a central area as positive. Results are bad, but we can see what the NN is trying to do. Increasing to 100 nodes gives very similar results.


With 1000 nodes the classifier improves its accuracy but misses big in the center of the model, and going up to 10,000 nodes does not improve much more. This makes me think that, as with the cosine in rectangular coordinates, there is a limit on what a single-hidden-layer neural network can model for the classifier, and that more complex distributions may need a different kind of network.


A deeper neural network

Next I was interested in what happens with deeper networks, expanding the exercises from the Udacity course to a 3-layer NN and testing it with the same models.
The first finding was that, for the positivos example, the NN needed at least 10 nodes in each layer to get good results. The same classifier also worked well for the linear model, but not for the circle.
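For reference, a minimal sketch of how the hidden part can be extended to a stack of layers of configurable sizes (again illustrative, not the notebook’s code; the layer_sizes tuples correspond to configurations like the (10,1000,1000) discussed below):

import tensorflow as tf  # 1.x-style graph API, as in the earlier sketch

def build_hidden_stack(x, layer_sizes, stddev=0.1):
    # Stack fully connected ReLU layers of the given sizes on top of the 2-D input x.
    activation, in_dim, weights = x, 2, []
    for out_dim in layer_sizes:           # e.g. (10, 1000, 1000)
        w = tf.Variable(tf.truncated_normal([in_dim, out_dim], stddev=stddev))
        b = tf.Variable(tf.zeros([out_dim]))
        activation = tf.nn.relu(tf.matmul(activation, w) + b)
        weights.append(w)                 # kept so L2 regularisation can cover every layer
        in_dim = out_dim
    return activation, weights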



The circle needed quite a number of nodes in some of the layers to get over 99% accuracy. The simplest model I could find with 99.4% was (10,1000,1000) nodes. Other models with a similar node count gave similar results. Models with fewer nodes either failed completely or gave gradually increasing accuracies.



Interestingly, the ring also converged faster, going from 94% accuracy with (100,100,100) nodes up to a nice 97.9% accuracy for a (1000,1000,1000) configuration.



The cosine also improved with the deeper network, achieving 94.3% accuracy with a (1000,1000,100) network. What is also interesting is that the models learned by the deep network approximated the target better than the wide network almost from the beginning. If you want to check, run the iterative version you can find commented out in the code.

The Polar Cosine and Deep Network

The most interesting case is the polar cosine on the deep network, as it looks like a really hard classification problem. Base accuracy is about 80.3% for the all-negative classification. As we grow the number of nodes in the different layers, interesting patterns appear, as you can see in the examples below.

The last example does a pretty good job classifying this complex model, with 98.1% accuracy using three layers of 1000 nodes each.


I find it very interesting to see how the wide neural network seems to have a limit on the complexity of the classifier it can learn, while the deep network looks able to capture that complexity. Check the Jupyter notebook on Github. You will need TensorFlow installed to run it.

Read More

Detecting Whales: Kaggle Right Whale Recognition Challenge

Fork on Github
Kaggle’s NOAA Right Whale Recognition Challenge aims to develop an algorithm to identify individual right whales, which are critically endangered. It is a great chance to study machine learning and digital image processing, although it looks to me like a really hard challenge. Anyway, I’ve developed this method to detect the whale in the photograph and I’m releasing it in the hope that it may help others.

It takes advantage of the fact that most pictures are pretty plain, with almost all of the area covered by water and a smaller region of interest corresponding to the whale, so the histogram for most of the image will be similar except on the region of interest. The algorithm recursively looks for subimages whose HSV histogram is not similar to the original image’s histogram, marking those regions in white and everything else in black. It then searches for the biggest contiguous region using contours and places a bounding box around it, assuming it’s the whale. The resulting image is called the “extract” and is saved along with the black & white mask.
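Here is a rough sketch of that idea (not the released code; the recursion depth, histogram bins and the 0.5 correlation threshold are arbitrary choices for illustration):

import cv2
import numpy as np

def hsv_hist(img):
    # Normalised hue/saturation histogram of a BGR image.
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def mark_regions(img, ref_hist, mask, x0=0, y0=0, depth=4, thresh=0.5):
    # Recursively mark (in white) sub-images whose histogram differs from the whole image's.
    h, w = img.shape[:2]
    if depth == 0 or min(h, w) < 32:
        if cv2.compareHist(ref_hist, hsv_hist(img), cv2.HISTCMP_CORREL) < thresh:
            mask[y0:y0 + h, x0:x0 + w] = 255   # dissimilar region: probably the whale
        return
    hh, hw = h // 2, w // 2
    for y1, y2, x1, x2 in ((0, hh, 0, hw), (0, hh, hw, w), (hh, h, 0, hw), (hh, h, hw, w)):
        mark_regions(img[y1:y2, x1:x2], ref_hist, mask, x0 + x1, y0 + y1, depth - 1, thresh)

image = cv2.imread("whale.jpg")
mask = np.zeros(image.shape[:2], np.uint8)
mark_regions(image, hsv_hist(image), mask)

# The biggest contiguous white region is assumed to be the whale.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
if contours:
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.imwrite("extract.jpg", image[y:y + h, x:x + w])
    cv2.imwrite("mask.jpg", mask)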

Check the code on Github. It uses Python 2.7 and OpenCV 3.0.

Original Image:

Whale found:

Areas found mask:

ROI Mask:

ROI Extract:

Read More

Stacked and Grouped Barplots in R

Fork on github
This is a modified version of the original barplot from R core that lets you add more series, stacked and grouped, by adding trailing space via the space parameter and a new space.before parameter.

barplot.sg(m3, space.before=0, space=2.5, col=pal1, ylim=c(0, 1.2*max(m1[2,])), border=NA)
barplot.sg(m2, space.before=1, space=1.5, col=pal2, xaxt="n", border=NA, add=T)
barplot.sg(m1, space.before=2, space=0.5, col=pal3, xaxt="n", border=NA, add=T)

stacked and grouped barplot

Read More

Arduino sample: Heartbeat

I’ve written a small sketch for Arduino that makes an LED blink with a function that resembles a human heartbeat.

In [1], Stevens and Lakin describe a detailed mathematical analysis of the cardiac pulse signal.

I’ve taken one of their equations that comes close to the cardiac pulse and generated a lookup table for the luminosity value.

  f(x)=(sin(x)^13) * cos(x-PI/10)

  check the function graphed on Google 
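A quick way to generate such a lookup table is to sample the function over one period and scale it to the 0-255 range used by analogWrite(); here is a small Python sketch (the table length and the clipping of negative values are my own choices for illustration):

import math

TABLE_SIZE = 256
values = []
for i in range(TABLE_SIZE):
    x = 2 * math.pi * i / TABLE_SIZE
    values.append((math.sin(x) ** 13) * math.cos(x - math.pi / 10))

# Clip negative values and scale the peak to 255 for the PWM output.
peak = max(values)
table = [int(round(max(v, 0.0) / peak * 255)) for v in values]
print(", ".join(str(v) for v in table))   # paste the result into the Arduino sketch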

Check the code at Github.

[1] Stevens, Scott; Lakin, William. “A Differentiable, Periodic Function for Pulsatile Cardiac Output Based on Heart Rate and Stroke Volume.” http://math.bd.psu.edu/faculty/stevens/Publications/paper3.pdf

Read More

DIY Raspberry Pi Cobbler (for the GPIO)

The Adafruit Raspberry Pi Cobbler is a nice breakout for the Raspberry Pi GPIO designed to connect to a breadboard. While not expensive (US$7.95), shipping simple electronic devices to Mexico can double its price, and it may take several weeks to be delivered. So I decided to make one from components I could get easily here.

Components:

  • 25 cm of 26-pin ribbon cable
  • Two female 26-pin ribbon cable headers (press-fit connectors)
  • One generic protoboard
  • One strip of break-away headers with 26 pins
  • One strip of break-away headers with 26 double pins

First, build the ribbon cable following Gert’s instructions.

Now, take a look at the protoboard I got from Steren. It looks like a solderless breadboard. Notice that in each row, the left side of the vertical tracks has two single-point pads while the right side has three. I used the side with two pads and inserted the row of double pins into them. The ribbon cable will be connected to these headers.

The protoboard. Notice the 2 single pads side and the 3 single pads side on each row

The double pin header row. Its position corresponds to the two single pads.

The single pin headers will connect the protoboard to the breadboard. As these terminals have to be longer and must be soldered on the copper side of the protoboard, I inserted them upside down and then pushed them from the top just to the plastic, to make them as long as possible on the bottom side.

The single pin headers inserted upside down. Check the three leftmost pins that have been pushed down further.

Given the odd layout of the protoboard, I found it easier to put the right-side headers one hole farther away and put jumpers between the tracks. Check how the solder bridges the upward- and downward-facing pins and tracks.


The jumpers on the top side and the soldering on the bottom side.

Finally, cut it using a Dremel. I should have done this before the soldering!

Side view.

Read More

A Laser Range Finder using Raspberry Pi, Arduino and OpenCV

I’ve been working on a project to build a laser range finder using a Raspberry Pi, an Arduino and OpenCV with a webcam. I hope that eventually this project may be used on a mobile robot using the algorithms taught in Udacity CS373, especially SLAM (Simultaneous Localization and Mapping).

The First Prototype

This first prototype is more a proof of concept than a usable device. Anyway, it’s working pretty well, except for being quite slow.

  • Raspberry Pi Model B running:
    • Archlinux ARM with a modified kernel to support the Arduino and the Webcam
    • OpenCV 2.4.1
    • Python 2.7
    • The LRF software
  • Arduino UNO connected via USB to the Raspberry Pi. It runs a controller that receives a message to turn the laser on and off. I hope it will also control some servos later.
  • A Logitech c270 webcam, disassembled, so it can be installed on the casing
  • Sparkfun TTL Controlled Laser Module
  • A targus mini USB hub
  • My Powered USB cable to provide the extra current that the Raspberry Pi can’t provide to the USB devices
  • A couple of USB power sources, one for the RPi and the other for the USB devices
  • A lousy acrylic casing, the first thing I’ve done with acrylic

Also check the video for an overview of its parts and how it works.

This prototype is very slow (one measurement takes about 10 seconds) but I’m optimistic that it may become more functional in a couple of iterations, especially with the upcoming Raspberry Pi Foundation CSI camera. The device is pretty accurate and precise at short distances but, as expected, both decrease at larger distances. I would estimate that up to a distance of 35 cm it’s very accurate, from 35 to about 60 cm it’s pretty good, and up to 2 m it may be good enough for a small robot. Later I’ll post more details on the measured precision and accuracy and some tricks to enhance them.

As you can see in the video, it has a simple web interface to trigger the measurement process. It can also be done from the command line by SSHing into the Raspberry Pi. I’ll also post how OpenCV detects the laser in the image and the next steps I’ll take to improve it. For now you can get most of the working code from Github. The details of the mathematical model appear below.

All comments are welcome here (comments section at the bottom) or via Twitter.

The Model

This diagram shows the basic idea of the project. The laser is shot at a target at a known angle and the image is captured by the webcam. The angle at which the laser appears on the image corresponds to the incidence angle of the laser at the target and, thus, to the distance to the target.

If the target is a little farther, so that the laser crosses the focus line of the camera, the model is a bit different:

Here we consider that both the camera-to-laser angle (β) and the distance from the camera to the laser (L) are fixed. We also know the focal distance (f) and the horizontal resolution (CAMERA_WIDTH), which are parameters of the camera. With OpenCV we can process the image and calculate the horizontal distance (vc) from the camera’s Y axis to the point where the laser appears on the image. Given those values we can use simple trigonometry to calculate the angle at which the laser appears on the image (δ) and the distance from the camera to the target (Dc). Note that we are looking for Dc and not D, which is the perpendicular distance from the camera to the target. By the way, for the purposes of this model the webcam is considered a pinhole camera. Later on we will correct the physical camera to adjust the model.

vx = CAMERA_WIDTH - vc
δ = atan( f / vx )
λ = π - β - δ
Dc = L * sin( β ) / sin( λ )
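A small numeric sketch of these equations (the constants below are made-up placeholders for illustration, not the calibrated values of the prototype; the last step is the law-of-sines form of the equation above):

import math

CAMERA_WIDTH = 1280        # horizontal resolution, pixels
F = 1100.0                 # focal length expressed in pixels
L = 10.0                   # camera-to-laser distance, cm
BETA = math.radians(80.0)  # fixed camera-to-laser angle

def distance_to_target(vc):
    # Distance from the camera to the laser dot, given its horizontal pixel position vc.
    vx = CAMERA_WIDTH - vc
    delta = math.atan2(F, vx)                  # angle at which the laser appears on the image
    lam = math.pi - BETA - delta               # remaining angle of the triangle
    return L * math.sin(BETA) / math.sin(lam)  # Dc, in the same units as L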

I’ll post later details on the implementation.

Read More

Visualization: Labor Mobility in Mexico

We keep analyzing the behavior of candidates looking for jobs on OCCMundial. This time we recreated a visualization we made more than a year ago about the “routes” that are generated when people apply for jobs outside the area where they live. For it, we obtained a data set of 48,801 anonymized job applications, tagged with the origin address (the candidate’s) and the destination address (the job’s, as captured by the recruiter). To get the geographic coordinates of both points we used the Google Maps API, sending the addresses as clean as possible and taking the closest coordinates Google could find. We plotted this data on a map as arrival/departure curves in Processing, using the method for georeferenced maps that I described some time ago and the white map with black lines from this other post.

In the visualization you can see the main cities of Mexico linked by curves. Each curve has an origin, that is, a point from which a candidate is applying for a job, and a destination, the place where the job is being offered. At the origin the line has more curvature and at the destination it arrives almost straight. For example, the following image shows the area of Puerto Vallarta (left) and Guadalajara (right), where two things can be seen: many more people arrive at and leave Guadalajara than Puerto Vallarta, but at the same time many more people want to go from Guadalajara to Puerto Vallarta than the other way around.
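For illustration, here is a minimal matplotlib sketch of this kind of asymmetric curve (the cubic Bezier control-point placement is my own approximation; the actual map was drawn in Processing): the curve bows sideways near the origin and arrives almost straight at the destination.

import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import PathPatch

def route(ax, origin, destination, color="steelblue"):
    # Cubic Bezier from origin to destination: curved at the origin, straight at the end.
    (x0, y0), (x1, y1) = origin, destination
    c1 = (x0 + 0.25 * (y1 - y0), y0 - 0.25 * (x1 - x0))  # sideways pull near the origin
    c2 = (x0 + 0.75 * (x1 - x0), y0 + 0.75 * (y1 - y0))  # on the straight line, near the end
    path = Path([origin, c1, c2, destination],
                [Path.MOVETO, Path.CURVE4, Path.CURVE4, Path.CURVE4])
    ax.add_patch(PathPatch(path, facecolor="none", edgecolor=color, lw=0.5, alpha=0.7))

fig, ax = plt.subplots()
route(ax, (0.2, 0.3), (0.8, 0.7))   # one origin-destination pair
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
plt.show()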

We also developed an interactive visualization that lets you better observe the arrival and departure routes. When you hover the mouse over a city, the departure routes are highlighted in blue and the arrival routes in green. The name and the counts are shown in the lower-left corner. The visualization is made with Processing.js and requires a browser with HTML5 support. The process is a bit slow, since every time a city is selected the 48,000 routes are recalculated to show only the relevant ones. It works on the iPad but requires a bit of patience.

There are some considerations to keep in mind when using this tool:

  • In many cases exact addresses are not available, so they are approximated as closely as possible.
  • When only state-level information is available, the routes are directed to a central point in that state. For example, in Baja California that point lies between Mexicali and Tijuana, a bit to the south.
  • There are errors in some locality names; if you find one, please let me know.
  • City names have no accents.

 

The graphic design is the work of @marco_aom, thank you very much!

All comments are welcome, via Twitter or on this blog!

Images licensed under the Creative Commons Attribution 3.0 Unported license. You are free to share (copy, distribute and transmit the work) and to remix (adapt the work) under the following condition (attribution): you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). Code is open source under the MIT License.

Read More

OpenCV on the Raspberry Pi with Arch Linux ARM

These are my notes on how I got OpenCV running on the Raspberry Pi today with a webcam. In this post you can find the Debian version that I did earlier.

  • Install Arch Linux ARM from image, use this guide.
  • Expand the Linux partition, also detailed in the same guide.
  • Configure it by copying arm224_start.elf to start.elf to get more memory for the apps.
  • Configure networking: edit /etc/rc.conf and /etc/resolv.conf. Check this topic.
  • Modify the pacman configuration /etc/pacman.conf to use curl to download packages (better for my slow connection) by uncommenting the line:
     XferCommand = /usr/bin/curl -C - -f %u > %o
  • I tried several times to update pacman and the system using
    pacman -Syu

    but I ran into some errors about udev and libusb, and I finally gave up on this step. In the end, everything worked except lxde, which I don’t need, so I’ll check back on this some other time.

  • Install lxde. I’m not sure whether some of the libraries installed by this are useful for OpenCV.
    pacman -S lxde xorg-xinit xf86-video-fbdev
    
  • lxde didn’t work: every time I tried xinit, it threw an error about libudev.so.1 not being found.
  • Install python2 (which was already installed but got updated), numpy, opencv and the samples:
    pacman -S python2 python2-numpy opencv opencv-samples
    
  • Finally I ran a simple test I use to open the webcam stream, take a frame and save it. It didn’t work immediately: I found that a Dell multimedia keyboard attached to the same USB hub (through my DIY powered USB cable, together with the webcam) had some issues. But after solving that, the camera works and saves the image. The sample is this:
    import cv2.cv as cv   # old OpenCV 2.x Python bindings
    import time

    #cv.NamedWindow("camera", 1)

    # Open the webcam (device index 1 on this setup; -1 picks the first available camera)
    #capture = cv.CaptureFromCAM(-1)
    capture = cv.CreateCameraCapture(1)

    # Request a 1280x720 frame from the camera
    #cv.SetCaptureProperty(capture,cv.CV_CAP_PROP_FPS, 3)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 1280)
    cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 720)

    # Grab a single frame and save it to disk
    img = cv.QueryFrame(capture)

    print "Captured "
    cv.SaveImage("output.jpg", img)

Read More

Law of Large Numbers, Visualized

As a follow-up to the post “How Common Is Your Birthday – 360 degrees”, we are getting our own data from OCCMundial to compare Mexico with the US data from the NYTimes. We made several runs with different dataset sizes and, as a byproduct, we got a visualization that shows how the probability distribution of the birthday rank becomes apparent as the dataset size increases.

Check the visualization and source code (in Processing.js) here.

Rank of the most common birthdays along the year. Whiter means higher rank, i.e. a more common birthday. Data for random samples of 5,000, 100,000, 1.5 million and 4.5 million records. As the dataset size increases, the real distribution becomes apparent. Data from Mexico (OCCMundial).

Read More

Visualization: How Common Is Your Birthday – 360 degrees

I’ve done a visualization based on this one by Matt Stiles. Use the mouse to locate a birth date. I’ve added a red arc that indicates the most probable conception date for the given birth date, based on an average pregnancy of 39.5 weeks. January 1st is at 0 degrees and the year goes clockwise. I think a circle gives a better impression of what’s happening around the year. Check the gaps at some special dates like July 4th, Christmas and the Thanksgiving weeks.
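For reference, the red arc is just the birth date shifted back by the average pregnancy length; a tiny Python sketch of the computation (for illustration only):

from datetime import date, timedelta

def estimated_conception(birth_date, pregnancy_weeks=39.5):
    # Most probable conception date, assuming an average pregnancy of 39.5 weeks.
    return birth_date - timedelta(weeks=pregnancy_weeks)

print(estimated_conception(date(1990, 7, 4)))   # 1989-10-01 (date arithmetic uses whole days)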

I hope to have another visualization with our own data soon.

Images licensed under the Creative Commons Attribution 3.0 Unported license. You are free to share (copy, distribute and transmit the work) and to remix (adapt the work) under the following condition (attribution): you must attribute the work in the manner specified by the author or licensor (but not in any way that suggests that they endorse you or your use of the work). Code is open source under the MIT License.

Read More