Research on Facial Pose Parameters Estimation Based on Feature Triangle

Chengyun Liu, Faliang Chang, Zhenxue Chen
Abstract: Facial pose estimation is a key technique that affects face-image applications, and pose parameters are important inputs to face-analysis algorithms. This paper introduces a pose-parameter estimation method based on the Adaboost algorithm and a facial feature triangle. First, the Adaboost algorithm is used to train facial feature detectors, and feature points are obtained according to the geometric structure of the face. These feature points are then used to construct a facial feature triangle. Finally, when the facial pose varies, the pose parameters can be estimated from the position of the feature triangle. Experimental results show that the algorithm estimates pose parameters effectively using the Adaboost algorithm and the facial feature triangle.


1. Introduction

Pose parameter estimation searches an input image for human faces and outputs a description that includes the position and size of each face. As a special case of pattern recognition, face detection has received great attention [1]-[3]. Many scholars have researched face detection and recognition systems, and various approaches have been proposed for face detection: Stan Z. Li proposed an algorithm for learning a boosted classifier that achieves the minimum error rate [4]. M. Debasis presented a novel face detection approach based on a convolutional neural architecture [5]. S. Romeil presented a face detection method using spectral histograms and support vector machines (SVMs) [6]. P. Viola proposed a face detection algorithm for color images in the presence of varying lighting conditions [7]. Y. Li proposed a novel face detection method based on the MAFIA algorithm [8]. S. Romdhani developed an efficient face detection scheme that can detect multiple faces in color images with complex backgrounds and different illumination levels [9]. M. Javier presented a color-based Adaboost face detection method [10].

In fact, skin color is a basic and effective feature of human faces and is widely used in face detection. Over the past decades, many studies have built skin models in different color spaces and proposed face detection algorithms based on skin color [9]-[12]. However, the skin-color feature is susceptible to interference from non-face regions with similar color: many algorithms work only against simple backgrounds, and their false detection rates rise in complex ones. This paper proposes an improved skin-color-based algorithm that locates face regions effectively even in complex backgrounds.

2. Facial feature points location based on Adaboost algorithm

2.1 Adaboost algorithm

The Adaboost algorithm combines a large number of weak classifiers into a single strong one. It has many advantages, such as a high recognition rate and fast training speed, which make face detection feasible.

The key elements of the Adaboost algorithm are rectangle features (shown in Fig.1), the integral image, and the cascade classifier. The rectangle features are defined according to the gray-level differences among facial regions. The integral image is a fast computation method for rectangle features. The function of the cascade classifier is to select the valuable rectangle features from among all candidates in order to construct the final classifier.
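A minimal sketch of the integral image and a two-rectangle feature (in Python for illustration; the paper's implementation uses OpenCV in VC++):

```python
def integral_image(img):
    """img: 2-D list of gray values; returns the summed-area table,
    where ii[y][x] is the sum of all pixels above and to the left of (x, y)."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def rect_sum(ii, top, left, bottom, right):
    """Sum of pixels in the inclusive rectangle, using at most 4 table lookups."""
    total = ii[bottom][right]
    if top > 0:
        total -= ii[top - 1][right]
    if left > 0:
        total -= ii[bottom][left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1][left - 1]
    return total

# A two-rectangle feature: gray-level difference between left and right halves.
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12]]
ii = integral_image(img)
feature = rect_sum(ii, 0, 0, 2, 1) - rect_sum(ii, 0, 2, 2, 3)
```

Once the table is built, every rectangle feature costs a fixed number of additions regardless of its size, which is what makes evaluating thousands of candidate features per detection window practical.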

Fig.1 Diagrams of facial rectangle features, panels (a)-(c)

2.2 Classifier training and facial features location

Facial feature location is important in face and expression recognition, so many methods have been proposed. The method in Ref. [7], based on the Adaboost algorithm, detects the eyes and mouth. In this paper, a new method is presented to locate the feature points: first, the face is detected by the trained face detector; then, within the face region, the eyes and mouth are located.

1) Facial classifier training and detection

The training of the face detector is a key part of feature point location. The training samples selected in this paper are face images with multiple poses against different backgrounds. The choice of threshold affects both the detection rate and the accuracy: during training, a smaller threshold pushes the detection rate toward its maximum, but the false detection rate rises with it. In this paper, suitable thresholds are chosen so that the detection rate reaches 96% while the false detection rate is about 12%.

False detections can be removed using prior knowledge about the face. This approach overcomes the low detection rate of Adaboost face detection under multiple poses and is highly robust.

2) Eyes classifier training and detection

The eyes must then be detected within the face region. According to prior knowledge, the eyes lie in the top part of the face region, which improves detection accuracy. An eye classifier usually selects only the eye region, but the brows and glasses can interfere. To address this, the eye samples used in training are images that include the eyes and part of the brows, all normalized to 40×40 pixels. The eye samples used in training are shown in Fig.2.

Fig.2 The eyes samples used in training

3) Mouth detection

Because the mouth color differs little from the skin, and because the mouth changes shape and may be surrounded by a beard, an Adaboost classifier trained to locate the mouth proved not robust in our experiments. The lower half of the face, however, has a simple, symmetrical structure, and the mouth is usually unobstructed. Binarization can therefore be used to find the center of the mouth, with detection accuracy up to 95%.

The binarization threshold is calculated from the gray-level histogram of the face frame. Since skin occupies the largest part of the frame, the most concentrated part of the histogram corresponds to the skin color, and its lower limit is taken as the threshold. After binarization, the centroid of the detected region gives the mouth center. The detection process is shown in Fig. 3.
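The steps above can be sketched as follows (a Python illustration; the peak-walk heuristic used to find the lower limit of the skin peak, and the function name, are assumptions rather than the paper's exact procedure):

```python
def mouth_center(region):
    """region: 2-D list of gray values for the lower half of the face box.
    Returns the (x, y) centroid of the below-threshold pixels, or None."""
    # 1) Gray-level histogram of the region.
    hist = [0] * 256
    for row in region:
        for v in row:
            hist[v] += 1
    # 2) The most concentrated part of the histogram is the skin peak;
    #    walk left from the peak while counts stay high to find its lower limit.
    peak = max(range(256), key=lambda g: hist[g])
    threshold = peak
    while threshold > 0 and hist[threshold - 1] >= hist[peak] // 4:
        threshold -= 1
    # 3) Binarize: pixels darker than the skin lower limit are mouth candidates;
    #    their centroid approximates the mouth center.
    ys, xs = [], []
    for y, row in enumerate(region):
        for x, v in enumerate(row):
            if v < threshold:
                ys.append(y)
                xs.append(x)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

On a synthetic patch of bright skin with a small dark blob, the centroid of the dark pixels lands on the blob, which is the behavior the binarization step relies on.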

Fig.3 The detection process of mouth

Finally, the located eyes and mouth are shown in Fig.4.

Fig.4 The located results of facial-features

3. Pose parameter estimation

Pose parameter estimation, also called posture detection, mainly calculates the face deflection angles about the three axes. Parameter estimation methods based on the feature triangle rely on the geometric structure of the face. Under some conditions, the face can be approximated as a rigid body: when the facial pose changes, the feature triangle shifts correspondingly (Fig.5). Thus, once the facial feature triangle has been built, pose parameter estimation reduces to computing the deflection angles of the triangle; if the three vertices of the feature triangle are obtained, the facial pose parameters can be estimated [8][9]. The X and Z axes are shown in Fig.5, and the Y axis is perpendicular to both.

Fig.5 Offset of facial-feature-triangle

The research in this paper is limited to the following two cases.

1) The face deflects only about the Z-axis, as shown in Fig. 6.

Fig.6 Face deflection relative to Z-axis

2) The face deflects only about the Y-axis, as shown in Fig. 7.

Fig.7 Face deflection relative to Y-axis

Assume the three feature points returned by facial feature detection are the right-eye point A(x_A, y_A), the left-eye point B(x_B, y_B), and the mouth point C(x_C, y_C). The facial feature triangle is shown in Fig.8:

Fig.8 Facial-feature-triangle

The lengths of the three edges of the facial feature triangle can be calculated from the three vertices:

a = sqrt((x_B - x_C)^2 + (y_B - y_C)^2)

b = sqrt((x_A - x_C)^2 + (y_A - y_C)^2)

c = sqrt((x_A - x_B)^2 + (y_A - y_B)^2)

where a, b, and c are the edges BC, AC, and AB, respectively.

The radians of the three interior angles follow from the law of cosines:

A = arccos((b^2 + c^2 - a^2) / (2bc))

B = arccos((a^2 + c^2 - b^2) / (2ac))

C = π - A - B
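The edge and angle computations can be sketched as follows (Python; the labels A for the right eye, B for the left eye, and C for the mouth are the labeling assumed in this section, and `math.dist` gives the Euclidean distance):

```python
import math

def triangle_params(A, B, C):
    """A, B, C: (x, y) vertices of the feature triangle. Returns the edge
    lengths (a, b, c) and the interior angles (in radians) at A, B, and C,
    computed by the law of cosines."""
    a = math.dist(B, C)   # edge opposite vertex A
    b = math.dist(A, C)   # edge opposite vertex B
    c = math.dist(A, B)   # edge opposite vertex C
    A_ang = math.acos((b * b + c * c - a * a) / (2 * b * c))
    B_ang = math.acos((a * a + c * c - b * b) / (2 * a * c))
    C_ang = math.pi - A_ang - B_ang   # angles of a triangle sum to pi
    return (a, b, c), (A_ang, B_ang, C_ang)
```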
When the face deflects only about the Z-axis, the facial feature triangle deflects as shown in Fig. 9:

Fig.9 Facial-feature-triangle deflection cases (a)-(e)

From the face posture model shown in Fig. 5, each face posture parameter in Fig. 9 is the angular offset of the facial feature triangle deflected about the vertex C, so the face posture estimation problem is converted into calculating the position of the triangle. The angular offset relative to the Z-axis is the deflection of the eye line AB from the horizontal, computed from the eye coordinates A(x_A, y_A) and B(x_B, y_B) as θ = arctan((y_B - y_A) / (x_B - x_A)). When the face deflects to the left, shown in Fig. 9 (a) and (b), y_A < y_B and the offset is positive; when the face deflects to the right, shown in Fig. 9 (d) and (e), y_A > y_B and the offset is negative; when x_A = x_B, the offset is 90°.
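A sketch of the Z-axis offset (Python; formulating the offset as the angle of the eye line AB against the horizontal, with its sign separating left from right deflection, is an assumption consistent with the case analysis above):

```python
import math

def roll_offset(A, B):
    """A, B: (x, y) coordinates of the right- and left-eye points.
    Returns the Z-axis deflection in degrees; the sign convention
    (positive = left deflection) is an assumption."""
    (xa, ya), (xb, yb) = A, B
    # atan2 handles the vertical eye line (xa == xb) without dividing by zero.
    return math.degrees(math.atan2(yb - ya, xb - xa))
```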

When the face deflects about the Y-axis, as shown in Fig. 10, the feature triangle is distorted and is no longer isosceles: the interior angle at the vertex on the side toward which the face rotates becomes larger. The following conclusion can therefore be drawn: if the angle at the right-eye vertex A exceeds the angle at the left-eye vertex B, the face turns right; if the angle at B exceeds the angle at A, the face turns left.
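The Y-axis rule can be sketched as follows (Python; comparing the two eye-vertex angles follows the principle stated above, while the tolerance `eps` for the near-frontal case is an assumption):

```python
def yaw_direction(angle_A, angle_B, eps=0.05):
    """angle_A, angle_B: interior angles (radians) at the right- and
    left-eye vertices of the feature triangle. eps is an assumed
    tolerance below which the face is treated as frontal."""
    if angle_A - angle_B > eps:
        return "right"   # larger angle on the right-eye side
    if angle_B - angle_A > eps:
        return "left"    # larger angle on the left-eye side
    return "frontal"
```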

Fig.10 Facial-feature-triangle

4. Experimental results

To verify the validity of the algorithm, 1023 color images were collected randomly from various videos for our experiments: 537 images in simple scenes and 486 in complex scenes. The algorithm was implemented in VC++ using OpenCV functions. The estimation results are shown in Fig.11.

Fig.11 Successful samples of pose parameter estimation: (a) (66.7557, 62.7969, 50.4473, 24.4252); (b) (63.6194, 77.8125, 38.5681)

The estimation rates are listed in Table 1, which shows the estimation performance of the Adaboost algorithm with the facial feature triangle. We can conclude that the method of this paper maintains good estimation performance. The algorithm imposes few restrictions: front view or side view, glasses or no glasses, rotation or no rotation all have little effect on the estimation results.

Table 1: Pose parameters estimation data table

Testing images | Number of poses | Correct estimations | False estimations | Estimation rate
Simple scene   |        -        |          -          |         -         |        -
Complex scene  |        -        |          -          |         -         |        -

Currently, our estimation algorithm runs on a 2.99 GHz Pentium Dual-Core PC. The timing data for the same video are listed in Table 2, which shows that the methods differ greatly in speed. The Adaboost algorithm with the facial feature triangle clearly maintains the best time performance for pose parameter estimation, so the method of this paper is a real-time method. Most significantly, the discrepancy in estimation rate among the methods is small.

Table 2: Elapsed time (unit: ms) and estimation rate (%) using different methods for test



                   | Ref. [3] | Ref. [4] | Ref. [10] | Ref. [11] | Proposed method
Computational cost |    -     |    -     |     -     |     -     |        -
Estimation rate    |    -     |    -     |     -     |     -     |        -

To show the robustness of the proposed method, we performed a noise sensitivity test. The test set is created by adding Gaussian noise with different SNR values to the simple-scene images. Table 3 shows the results: as the noise level increases, the rate of correct estimation decreases. However, the proposed system still achieves an average of 91.7% correct estimation on test images with noise added at an SNR of -3 dB, which demonstrates its robustness to noise.
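The noise setup can be sketched as follows (Python; the power-based SNR definition and the clipping to [0, 255] are assumptions, as the exact procedure is not given):

```python
import math
import random

def add_gaussian_noise(pixels, snr_db, seed=0):
    """pixels: flat list of gray values. Adds zero-mean Gaussian noise
    scaled so that the signal-to-noise ratio equals snr_db (in dB),
    then clips the result to the valid gray range [0, 255]."""
    rng = random.Random(seed)
    signal_power = sum(v * v for v in pixels) / len(pixels)
    noise_power = signal_power / (10 ** (snr_db / 10))
    sigma = math.sqrt(noise_power)
    return [min(255.0, max(0.0, v + rng.gauss(0.0, sigma))) for v in pixels]

# Example: corrupt a uniform patch at SNR = -3 dB (noise power ~2x signal power).
noisy = add_gaussian_noise([100] * 16, -3)
```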

Table 3: Results of sensitivity analysis

SNR (dB)            |  -  |  -  |  -  |  -  |  -
Estimation rate (%) |  -  |  -  |  -  |  -  |  -
Correct number      |  -  |  -  |  -  |  -  |  -

5. Conclusions

In this paper, the Adaboost algorithm is used to detect and locate facial feature points, which then form the facial feature triangle. When the face pose changes, the feature triangle parameters are detected and the pose is estimated. This method obtains good pose parameter estimation performance. Pose parameter estimation based on the Adaboost algorithm and the facial feature triangle can be applied to fatigue-driving detection and human-machine interaction.


Acknowledgments

This work has been supported by the National Natural Science Foundation of China (61273277 and 61203261), the Natural Science Foundation of Shandong Province of China (ZR2011FM032 and ZR2012FQ003), the Doctoral Fund of the Ministry of Education of China (20090131120039), and the State Key Laboratory of Robotics and System (HIT) (SKLRS-2010-MS13).


References

[1] R. Brunelli. Estimation of pose and illuminant direction for face processing. Image and Vision Computing, 15 (3): 741-748, 1997.

[2] W. Li, E. J. Lee. Multi-pose face recognition using head pose estimation and PCA approach. International Journal of Digital Content Technology and its Applications, 4 (1): 112-122, 2010.

[3] V. Roberto, S. Nicu, G. Theo. Combining head pose and eye location information for gaze estimation. IEEE Transactions on Image Processing, 21 (2): 802-815, 2012.

[4] S. Z. Li. Learning multi-view face subspaces and facial pose estimation using independent component analysis. IEEE Trans. Image Process., 14 (6): 705-712, 2005.

[5] M. Debasis, D. Santanu, M. Soma. Automatic feature detection of a face and recovery of its pose. Communicated to Journal of IETE, Washington, 505-511, 2009.

[6] S. Romeil, D. Samuel, Y. Anthony, et al. A nonrigid kernel-based framework for 2D-3D pose estimation and 2D image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33 (6): 1098-1115, 2011.

[7] P. Viola, M. Jones. Rapid object detection using a boosted cascade of simple features. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, USA, 2001.

[8] Y. Li, S. Gong and H. Liddell. Recognising trajectories of facial identities using kernel discriminant analysis. Image and Vision Computing, 21 (13-14): 1077-1086, 2008.

[9] S. Romdhani, V. Blanz, T. Vetter. Face identification by fitting a 3D morphable model using linear shape and texture functions. In Proc. 7th European Conference on Computer Vision, 3-19, 2009.

[10] M. Javier, E. Ali and B. George, et al. Integrating perceptual level of detail with head-pose estimation and its uncertainty. Machine Vision and Applications, 21 (1): 69-83, 2009.

[11] K. Volker, B. Sven, S. Gerald. Efficient head pose estimation with Gabor wavelet networks. In Proc. British Machine Vision Conference, Bristol, Great Britain, 12: 11-14, 2010.

[12] C. Chong, S. Dan. A particle filtering framework for joint video tracking and pose estimation. IEEE Transactions on Image Processing, 19 (6): 1625-1634, 2010.
