OpenCV and Emotion Recognition Using Intel Galileo Gen 2

Using OpenCV

This is part of the book “Intel Galileo and Intel Galileo Gen 2 – API Features and Arduino Projects for Linux Programmers”. You can download it for free using this link


Open source Computer Vision (OpenCV) is a set of cross-platform libraries containing functions that provide computer vision in real time.

OpenCV is a huge framework, and there are some basic functions needed to capture and process videos and images so that applications can communicate with input devices, such as webcams. This chapter introduces the basic concepts needed to build powerful applications with your Intel Galileo board. The project will focus on how to connect a webcam to Intel Galileo, how the webcam works in Linux, how to capture pictures and videos, how to change the pictures with OpenCV algorithms, and how to detect and recognize faces and emotions.

BSP (board support package) SD card images of the Intel Galileo board support OpenCV and allow projects like the one in this chapter to be developed.

Several programs and tasks will be executed in this project. They are divided into Video4Linux and OpenCV categories as follows:

  1. Identify the capabilities of a webcam with V4L2.

  2. Capture pictures using V4L2.

  3. Capture videos using V4L2.

  4. Capture and process images with OpenCV.

  5. Incorporate edge detection in your pictures with OpenCV.

  6. Incorporate face and eye detection with OpenCV.

  7. Detect emotions with OpenCV.


Note that the V4L2 examples use C and the OpenCV examples are written in C++ and Python. This is done to illustrate the performance of OpenCV in different languages and its cross-platform capabilities.

OpenCV Primer

OpenCV was developed by Intel Research and is now supported by Willow Garage under the open source BSD license.

But what is computer vision, and what is it used for? Computer vision is the ability to provide methods and algorithms that help computers interpret the environment around them. Human eyes are able to capture the environment around us stereographically. They send the images to our brains, which interpret the images with a sense of depth, shape, and dimension for all the components that compose an image.

For example, when you look at a dog in a park, you can tell how far the dog is from you, exactly where the dog is, whether you know the dog and its name, the shapes of the objects in the park such as sandboxes, trees, and parked cars, whether it is going to rain, and so on.

A three-month-old baby can identify objects and faces, in a process that seems completely natural for human beings.

What about computers? How do we program computers to use the same kind of analysis and come to the same conclusions when analyzing a simple picture of the park?

Several mathematical models, statistical data, and machine-learning methodologies have been developed that allow computers to “see” the world and understand the environment around them.

Robots use computer vision to assemble cars, recognize people, help patients in hospitals, and replace astronauts on dangerous missions in space. In the future they will be able to replace soldiers on the battlefield, perform surgeries with precision, and more.

The OpenCV libraries offer a powerful infrastructure that enables developers to create sophisticated computer vision applications, abstracting all the mathematical, statistical, and machine-learning models out of the application context.

It is important to understand how V4L2 works because OpenCV sometimes throws “mysterious” messages that are actually related to V4L2 issues, not to OpenCV itself, which can be confusing. If you focus exclusively on OpenCV, it will be difficult to understand what is going on and how to fix these problems.

If you need more details about how the algorithms work, visit the OpenCV website and improve your knowledge with books dedicated exclusively to OpenCV and image processing.

Project Details

This project requires a webcam to serve as Intel Galileo’s “eyes” to capture pictures and videos and apply algorithms using OpenCV. If you are using Intel Galileo, you will also need an OTG-USB adapter to connect the webcam because, unlike Intel Galileo Gen 2, Intel Galileo does not have an OTG-USB connector.

You’ll need to generate a custom BSP image that contains all the tools and software packages that will be used. You can also download the BSP image from the code folder and copy it to the micro SD card, which will save you hours of building with Yocto. The tools and ipk packages used in this chapter require more space than the SPI images can support, so a micro SD card is necessary.

Before focusing on the OpenCV examples, it is necessary to understand the capabilities of your webcam, such as the supported resolutions, encodings, and frames per second. Understanding these capabilities using V4L2 will prevent you from wasting hours trying to decipher errors that in fact come not from OpenCV but from V4L2.
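A minimal sketch of that capability check, assuming the webcam appears as /dev/video0 (the function name `print_webcam_caps` is mine): it opens the device and issues the VIDIOC_QUERYCAP ioctl, the same call that V4L2 tools and OpenCV rely on underneath.

```cpp
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>
#include <cstdio>
#include <cstring>

// Queries a V4L2 device and prints its driver, card name, and whether
// it supports video capture. Returns 0 on success, -1 if the device
// cannot be opened or does not answer the V4L2 ioctl.
int print_webcam_caps(const char* dev) {
    int fd = open(dev, O_RDWR);
    if (fd < 0) {
        std::perror(dev);            // e.g. no webcam attached
        return -1;
    }
    v4l2_capability cap;
    std::memset(&cap, 0, sizeof(cap));
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        std::perror("VIDIOC_QUERYCAP");
        close(fd);
        return -1;
    }
    std::printf("driver : %s\n", cap.driver);
    std::printf("card   : %s\n", cap.card);
    std::printf("capture: %s\n",
                (cap.capabilities & V4L2_CAP_VIDEO_CAPTURE) ? "yes" : "no");
    close(fd);
    return 0;
}
```

If the call fails with a perror message instead of printing the capabilities, the problem is on the V4L2 side, which is exactly the kind of error OpenCV would later report “mysteriously”.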


My book is available now!

After 11 months of working all weekends and most nights (late nights), on top of my duties at Intel, the book “Intel Galileo and Intel Galileo Gen 2: API Features and Arduino Projects for Linux Programmers” is at the printer and will soon be available for shipping.

One of the conditions that made me accept the proposal to write this book was that the digital book must be FREE. The focus is to serve our maker/open-source community, colleges, and any individual interested in our boards… You will be able to download the digital content from the Apress website, or you can order the hardcopy here

The book has a little more than 700 pages (in fact it was more than 800, but we trimmed it down) and covers several topics, such as details of our Yocto build process, the integration of the 7160 LTE modem, OpenCV with emotion recognition, six analog control channels to control robotic arms, and much more.

The front matter is below:

What is in the book?

Chapter 1 discusses the hardware design of Intel Galileo and Intel Galileo Gen 2, and the construction of serial and FTDI cables for debugging using Linux terminal consoles.

Chapter 2 explains how the Yocto build system works and how to generate your custom SPI and SD card images. It also shows how to compile, install, and use the toolchains for native application development, and discusses procedures to recover bricked Intel Galileo boards.

Chapter 3 shows how to install and use the Arduino IDE and how to install the drivers needed on the computer or virtual machine used, running real examples of sketches interacting with simple circuits. It also brings a practical project that integrates Python, POSIX calls, and sketches to send an alert when an email is received.

Chapter 4 discusses the new APIs and hacking techniques created especially for the Intel Galileo and Intel Galileo Gen 2 boards. It contains a broad discussion of the GPIO cluster architecture, how the GPIOs are distributed, and their respective speed limits.

A practical project showing how to overcome Intel Galileo’s limitations and make the DHT11 temperature sensor work is also presented.

Chapter 5 presents networking APIs and hacks using the Ethernet adapter and Wi-Fi mPCIe cards. It also explains how to install new Wi-Fi cards, how to share Internet access between Intel Galileo and computers, and how to hack the Arduino IDE to download sketches using network interfaces instead of USB.

Chapter 6 is a practical project about tweeting with Intel Galileo boards using the new OAuth authentication, without intermediary computers or servers. The project uses an RTC (real-time clock) with external coin batteries and Wi-Fi mPCIe cards.

Chapter 7 shows techniques for using the V4L2 and OpenCV libraries to capture images and videos and to detect faces and emotions using a webcam. This chapter also explains how to change the Linux BSP to support eglibc instead of uClibc and how to generate the toolchain to compile C/C++ programs. There are also OpenCV examples in Python.

Chapter 8 presents a low-cost project to create moisture sensors based on scrap materials and galvanized nails.

Chapter 9 shows a practical home automation project implementing a web server using node.js, interacting with multiple sensors such as motion and temperature sensors, keypads, and switch relays.

Chapter 10 explains how to install and use PoE (Power over Ethernet) modules with Intel Galileo Gen 2.

Chapter 11 discusses basic principles of robotics and how to design and control a robotic arm using analog controllers, and shows a practical project using a 6-DOF robotic arm with a mechanical gripper, plus another gripper built with ground coffee.

Chapter 12 discusses how to connect an XMM 7160 LTE modem and use data channels on real networks with Intel Galileo boards.

Chapter 13 is only available online, under the Source Code/Downloads tab. It shows a practical project of how to design and build a low-cost robot head with animatronic eyes and a mouth that expresses emotions.

What about Intel Edison?

In a few months we will have a “mini version” of this book covering Intel Edison, and the digital version will also be FREE.

Many thanks to ALL! I hope you appreciate the book.

Debugging tombstones with ndk-stack and addr2line


I really like working with several components of a system, from the Linux kernel all the way up to userspace.

If you work with Android and you are a real engineer, it is very difficult to resist the native support offered by the Android NDK. If your software requires performance with a graphics API like OpenGL, or if you need to access specific information provided by some native library, the NDK is for you.

However, bugs on the native side sometimes take time to fix, and it is usually not an easy or fast task. If you have the device and an easy way to reproduce the issue, it is fine; but suppose you need to collect logs from remote users and the scenario is very difficult to reproduce. In some cases the users do not even see the crash visually, but the system contains the logs, blocking the approval of software that must be delivered to a customer… at which point the program/project managers are “talking” in your ears: “fix it!”

On Android, every time a process running on the native side crashes, we get small pieces of the stack in files called tombstones.

The tombstones are located at /data/tombstones as isolated files (one file represents one crash), or you can see them in your logcat. Take a look in the adb shell:

root@android:/ # find . |grep tombs

root@android:/ #

The tombstone informs you about:

  1. Build fingerprint
  2. Crashed process and PIDs
  3. Terminated signal and fault address
  4. CPU registers
  5. Call stack
  6. Stack content of each call

I will not post a full tombstone here. Check /data on your device and you will see the six sections mentioned above. Let’s go straight to the point: how to debug the stack in tombstone files!
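As a sketch of the workflow, the basic flow looks like this; the paths (obj/local/armeabi-v7a, tombstone_00, libnative.so) and the address are hypothetical examples, so adjust them to your project layout and NDK install:

```shell
# 1) Pull the tombstone from the device:
adb pull /data/tombstones/tombstone_00 .

# 2) Symbolize the whole call stack with ndk-stack, pointing -sym at the
#    directory that holds your unstripped .so files:
ndk-stack -sym obj/local/armeabi-v7a -dump tombstone_00

# 3) Or resolve a single pc address from the tombstone with addr2line
#    (-C demangles C++ names, -f prints the function name):
arm-linux-androideabi-addr2line -C -f -e obj/local/armeabi-v7a/libnative.so 0x0000841e
```

The key detail is that ndk-stack and addr2line must be given the unstripped libraries; the stripped .so installed on the device has no symbol information to resolve.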


Android Services: Recommendations when using AIDL for continuous service and low current drain impact

Last month a lot of things happened… I lost my dad and moved to another location, so this blog was not the priority, but it was not forgotten.

This post is related to some issues I have observed in some Dalvik implementations, and some recommendations if you want to create a service that must remain running all the time, even after the device is powered on/off or reset.

Suppose you need to create a “light” service that must:

1) start every time your device boots;

2) contain very nice parcelable objects using AIDL syntax, sharing them through IPC;

3) monitor something every X minutes without impacting the current drain;

4) use the “uptime”, I mean, report “how long” it has been running;

5) be able to receive interruptions and report them.

A good application of this service could be an alarm central unit for your car. You could create this project using your old Android phone and hide it in your car. You could also power the old phone by disassembling a 12V charger and connecting it to the terminals of your car battery.

So, let’s go over some common mistakes… see the list below:


POSIX Threads and Joins – Parallel Programming – PART 1

For a long time we were “blessed” with computers that had only a single CPU, usually with a mediocre single core and a single thread. All software developed at that time executed sequential processing.

However, multi-core and multi-threaded processors have since been introduced in the market; but if you do not develop your software to explore the hardware advantages, it might perform the same as if it were running on a single-CPU machine.

You also need to think about whether your software will run on an isolated computer, or whether it will run on multiple computers sharing a common network and organizing different tasks to solve a common problem.

Today I am writing this post for my own reference on POSIX threads and how to make the best use of joins and mutex implementations. I am running Ubuntu 10.04, but if you are running Windows you can use these examples by installing the POSIX win32 compatibility module or by installing Cygwin to simulate a Linux shell.

Code compilation

All code in this post was compiled using gcc, invoking the POSIX libraries through the command line under Ubuntu 10.04. For example, you can use:

gcc -pthread <my source code.c>


The first code is related to a model with no policy regarding thread priorities or resource access management: although the threads run in the same process and share the same memory space, in this first example access to shared resources is not coordinated and can conflict.

It is a simple piece of code that creates 3 threads, passes arguments to them using strings and integers, and then waits for them to terminate.

Take a look at the code below:
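The original listing is not reproduced in this excerpt, so here is a minimal sketch consistent with the description above: it creates 3 threads, passes each one a string and an integer through a struct, and joins them all. The names (`ThreadArg`, `worker`, `run_threads`) are mine, not from the original post.

```cpp
#include <pthread.h>
#include <cstdio>

// Argument struct handed to each thread: a message and an id.
struct ThreadArg {
    const char* message;
    int id;
    int result;        // filled in by the thread itself
};

// Thread body: print the arguments, then store a trivial computation
// to show data flowing back to the creating thread.
static void* worker(void* p) {
    ThreadArg* arg = static_cast<ThreadArg*>(p);
    std::printf("thread %d says: %s\n", arg->id, arg->message);
    arg->result = arg->id * arg->id;
    return nullptr;
}

// Create 3 threads, pass each a string and an integer, join them all,
// and return the sum of their results (1 + 4 + 9 = 14).
int run_threads() {
    const char* messages[3] = { "hello", "from", "POSIX threads" };
    pthread_t tids[3];
    ThreadArg args[3];
    for (int i = 0; i < 3; ++i) {
        args[i] = ThreadArg{ messages[i], i + 1, 0 };
        pthread_create(&tids[i], nullptr, worker, &args[i]);
    }
    int sum = 0;
    for (int i = 0; i < 3; ++i) {
        pthread_join(tids[i], nullptr);  // blocks until thread i finishes
        sum += args[i].result;
    }
    return sum;
}
```

Call `run_threads()` from your `main()` and compile with `g++ -pthread`. The joins matter: without them, the process could exit before the threads ever run.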


Newton Raphson C++ class for float division and other functions

Some months ago, when I was looking for a job, a company asked me to write a class to evaluate float divisions and find the sqrt() of a number using Microsoft Visual C++. The challenge was to create a class that performs these operations while avoiding the division operator “/”.

I decided to use an old methodology I learned in college, more specifically in numerical methods. The name of the method is Newton-Raphson.

About the Newton-Raphson Method

The idea is very simple… given a function, you guess an initial x0. Then you evaluate the tangent line at that point using the derivative, which gives you x1. Then you do the same with x1, and so on…
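To make this concrete, here is a hedged sketch of how such a class’s core operations can avoid “/” entirely (this is my reconstruction, not the code from the interview): the reciprocal iteration x_{n+1} = x_n(2 − d·x_n) converges to 1/d using only multiplication and subtraction, and the inverse-square-root iteration x_{n+1} = x_n(1.5 − 0.5·a·x_n²) yields sqrt(a) as a·(1/sqrt(a)).

```cpp
#include <cmath>

// Reciprocal 1/d via Newton-Raphson on f(x) = 1/x - d.
// Update x <- x * (2 - d*x): multiplications and subtractions only.
// Initial guess 2^-e (with d = m * 2^e, m in [0.5, 1)) keeps the
// iteration inside its convergence region. Assumes d > 0.
double nr_reciprocal(double d, int iters = 40) {
    int e = 0;
    std::frexp(d, &e);               // extract the binary exponent of d
    double x = std::ldexp(1.0, -e);  // x0 = 2^-e
    for (int i = 0; i < iters; ++i)
        x = x * (2.0 - d * x);       // quadratic convergence
    return x;
}

// a / b expressed as a * (1/b): no "/" operator anywhere.
double nr_divide(double a, double b) {
    return a * nr_reciprocal(b);
}

// sqrt(a) via the inverse square root: Newton on f(x) = 1/x^2 - a gives
// x <- x * (1.5 - 0.5*a*x*x), also division-free; then sqrt(a) = a * x.
// Assumes a > 0.
double nr_sqrt(double a, int iters = 60) {
    int e = 0;
    std::frexp(a, &e);
    double x = std::ldexp(1.0, -e / 2); // rough guess near 1/sqrt(a)
    for (int i = 0; i < iters; ++i)
        x = x * (1.5 - 0.5 * a * x * x);
    return a * x;
}
```

The exponent-based initial guess is the important design choice: the reciprocal iteration diverges if x0 is not in (0, 2/d), and starting from 2^-e guarantees that for any positive d.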