Color: Better late than never
For a blog boasting the title Colored Pixels, we are curiously bereft of colorful examples. None of the previous activities discussed here have dealt exclusively with color; all of our image processing techniques thus far have been applied to grayscale images. In this activity, we turn our attention to manipulating colored digital images.
Color is a property of an object that is visually perceived by a human and is classified into categories such as red, blue, and yellow. This happens when a spectrum of light (from an illuminated object) interacts with the receptor cells in the eye. In objects and materials, color categories are also associated with physical properties such as light absorption, reflection, or emission spectra. Human vision is considered trichromatic: a minimum of three primary colors, combined in different proportions, can represent all (or at least a very wide range, roughly 10 million) of the perceivable colors. However, because the spectral sensitivities of these receptors vary among humans, color perception can be subjective: different people may perceive the same illuminated object differently.
Colored Pixels
Unlike texture, which is a property of an area, color in a digital image is a property of a pixel. To quantify color, color spaces are used: colors are organized according to some criteria and represented in a 2D or 3D plot, so that a specific color can be referenced as a coordinate in that representation. Often these coordinates are triples (e.g. RGB) or quadruples (e.g. CMYK). Examples of color spaces are CIELAB and CIEXYZ.
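To make the idea of a color coordinate concrete, here is a minimal Python sketch that maps an RGB triple into CIEXYZ. The assumption that the input is 8-bit sRGB is ours (the post does not name a working space); the matrix and gamma curve are the standard sRGB (D65) ones.

```python
import numpy as np

# Standard sRGB (D65) to CIEXYZ matrix. Assumed here because the post
# does not specify which RGB working space its pixels live in.
SRGB_TO_XYZ = np.array([
    [0.4124564, 0.3575761, 0.1804375],
    [0.2126729, 0.7151522, 0.0721750],
    [0.0193339, 0.1191920, 0.9503041],
])

def srgb_to_xyz(rgb):
    """Map an 8-bit sRGB triple to CIEXYZ coordinates."""
    c = np.asarray(rgb, dtype=float) / 255.0
    # Undo the sRGB gamma so the matrix acts on linear intensities.
    linear = np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)
    return SRGB_TO_XYZ @ linear

print(srgb_to_xyz([255, 0, 0]))  # pure red -> approx. [0.4125, 0.2127, 0.0193]
```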
*Comparison of some RGB and CMYK color gamuts*
Early attempts at using color as a metric for scientific and practical applications struggled because captured color (whether on film or with a digital sensor) is very unstable: it depends strongly on the material (e.g. surface reflectivity), the environment (e.g. lighting conditions), and the properties of the capture device (e.g. camera sensitivity).
*Effects of lighting conditions on the captured image*
Selfie for Science
In this activity we learn the basics of face detection using color image processing techniques.
You may have noticed while operating your digital camera how a free-moving rectangle outline manages to zoom onto a face's location, often centering and auto-focusing on that face. This is a ubiquitous application of color image processing techniques. Another application that has already made its way to market is face recognition technology, with hardware implementations integrated into many modern notebook computers as well as security appliances.
We shall begin our study of face detection with a simple model of the skin. Using sample regions from the forehead and the cheek, we computed the normalized chromaticity coordinates (NCC) representation of each sample.
*Normalized chromaticity coordinates (NCC)*
This is done by dividing two of the color channels, e.g. R and G, by the total over all channels, I = R + G + B, to obtain coordinates in the rg space: r = R/I and g = G/I. The third coordinate, b = B/I, is optional because it can be derived from the other two as b = 1 - r - g.
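A minimal Python sketch of this normalization (the function name and the use of NumPy are our own choices, not from the activity):

```python
import numpy as np

def rgb_to_ncc(img):
    """Convert an RGB image (H x W x 3, any numeric dtype) to normalized
    chromaticity coordinates. Returns the r and g planes; b = 1 - r - g."""
    img = img.astype(float)
    intensity = img.sum(axis=2)        # I = R + G + B per pixel
    intensity[intensity == 0] = 1.0    # avoid division by zero on black pixels
    r = img[:, :, 0] / intensity
    g = img[:, :, 1] / intensity
    return r, g
```

Dividing by the total intensity is what makes the representation relatively insensitive to brightness changes, since scaling all three channels by the same factor leaves r and g unchanged.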
*Normalized chromaticity coordinates for (a) the full image, (b) the forehead sample, (c) the left cheek sample*
As shown in the results above, the pixels corresponding to the forehead and left cheek regions are localized in the NCC space. This region is the skin locus. With these samples we are able to detect the face in the image, either by setting the pixels that do not fall within the skin locus to zero, or by histogram backprojection.
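The backprojection step can be sketched as follows: build a 2D histogram of the (r, g) values of the skin samples, then replace every image pixel with the histogram value at its own (r, g) bin, so that skin-like pixels light up. The bin count and threshold below are illustrative choices, and `rgb_to_ncc` refers to the earlier sketch.

```python
import numpy as np

BINS = 32  # resolution of the rg histogram; a coarse grid generalizes better

def skin_histogram(patch_r, patch_g, bins=BINS):
    """2D rg histogram of a skin sample, normalized to [0, 1]."""
    hist, _, _ = np.histogram2d(patch_r.ravel(), patch_g.ravel(),
                                bins=bins, range=[[0, 1], [0, 1]])
    return hist / hist.max()

def backproject(r, g, hist, bins=BINS):
    """Replace each pixel with its skin-histogram likelihood."""
    ri = np.clip((r * bins).astype(int), 0, bins - 1)
    gi = np.clip((g * bins).astype(int), 0, bins - 1)
    return hist[ri, gi]

# Usage sketch: r, g come from rgb_to_ncc(image); pr, pg from a skin patch.
# mask = backproject(r, g, skin_histogram(pr, pg)) > 0.1  # threshold is ad hoc
```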
*Face detection: histogram backprojection using NCC of skin samples*
Skin locus under different lighting conditions
It now remains to demonstrate the effect of lighting conditions on the skin locus. If possible, we shall also identify the challenges of implementing this in a more real-time setting. We took a 19-second video in which we walk through an area of varying lighting conditions; shown here are 9 still images from the video. We computed the skin locus of the forehead region in each and plotted the points in NCC space.
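A rough sketch of this frame-sampling step, assuming OpenCV is available; the filename and the fixed forehead rectangle are placeholders (in practice the region would have to be re-located as the face moves between frames):

```python
import cv2
import numpy as np

# Hypothetical inputs: the clip name and the forehead ROI (x, y, w, h)
# stand in for whatever the actual video and tracked region would be.
cap = cv2.VideoCapture("walkthrough.mp4")
x, y, w, h = 300, 120, 60, 40

locus_r, locus_g = [], []
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                 # sample roughly one frame per second
        roi = frame[y:y + h, x:x + w].astype(float)
        b, g, r = roi[..., 0], roi[..., 1], roi[..., 2]   # OpenCV stores BGR
        total = r + g + b
        total[total == 0] = 1.0
        locus_r.append((r / total).ravel())
        locus_g.append((g / total).ravel())
    frame_idx += 1
cap.release()

locus_r = np.concatenate(locus_r)   # pooled rg coordinates across lighting changes
locus_g = np.concatenate(locus_g)
```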
*Skin locus under different lighting conditions*
Even under varying lighting conditions, the skin locus is remarkably robust, remaining localized in NCC space. Using the skin locus, together with blob detection on the histogram-backprojected image, it is possible to track the face (or any object with a similar chromaticity profile). The upper and lower boundaries of this locus can be modeled with two polynomials, which is useful because it removes the need for explicit auto-white balancing or for sampling new frames under every lighting condition, as we have done here. The computations are straightforward and easily implemented, as the sketch below suggests.
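One way to realize the polynomial-boundary idea (a sketch only; the bin count and quadratic degree are ad hoc choices on our part) is to take per-bin extremes of the pooled locus points and fit curves through them:

```python
import numpy as np

def locus_bounds(locus_r, locus_g, degree=2, nbins=20):
    """Fit two polynomials g = f(r) bounding the skin locus from above
    and below, using per-bin extremes of the sampled rg points."""
    edges = np.linspace(locus_r.min(), locus_r.max(), nbins + 1)
    centers, g_lo, g_hi = [], [], []
    for i in range(nbins):
        in_bin = (locus_r >= edges[i]) & (locus_r < edges[i + 1])
        if in_bin.any():
            centers.append(0.5 * (edges[i] + edges[i + 1]))
            g_lo.append(locus_g[in_bin].min())
            g_hi.append(locus_g[in_bin].max())
    lower = np.polyfit(centers, g_lo, degree)   # lower boundary coefficients
    upper = np.polyfit(centers, g_hi, degree)   # upper boundary coefficients
    return lower, upper

def is_skin(r, g, lower, upper):
    """Pixel-wise test: inside the band between the two fitted curves."""
    return (g >= np.polyval(lower, r)) & (g <= np.polyval(upper, r))
```

Once fitted, the two sets of coefficients replace the stored histogram entirely: classifying a pixel reduces to evaluating two polynomials, which is cheap enough for real-time tracking.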
References
- Wikipedia contributors, "Color," Wikipedia, The Free Encyclopedia, https://en.wikipedia.org/wiki/Color
- R. Kuehni, "Color spaces," Scholarpedia, 5(3):9606, 2010.
- M. Soriano, B. Martinkauppi, S. Huovinen, and M. Laaksonen, "Adaptive skin color modeling using the skin locus for selecting training pixels," Pattern Recognition, vol. 36, no. 3, pp. 681-690, March 2003. http://dx.doi.org/10.1016/S0031-3203(02)00089-4
- M. Soriano, "Color Features," lecture notes distributed in Physics 301 - Special Topics in Experimental Physics (Advanced Signal and Image Processing), National Institute of Physics, University of the Philippines Diliman, Quezon City, 12 September 2015.