# I. Introduction

The human face plays an important role in our social interaction by conveying a person's identity. Face recognition is the ability to recognize people by their unique facial characteristics. Biometric technologies identify individuals by physiological characteristics such as the face, fingerprints, finger geometry, hand geometry, hand veins, palm, iris, retina, ear, and voice, or by behavioural traits such as gait, signature, and keystroke dynamics. Face recognition can be performed passively, without any explicit action or participation on the part of the user, since face images can be acquired from a distance by a camera. Iris and retina identification require expensive equipment and are highly sensitive to body motion. Voice recognition is susceptible to background noise in public places and to auditory fluctuations on a phone line or tape recording. Signatures can be modified or forged. Facial images, however, can be obtained easily with a couple of inexpensive fixed cameras. Face recognition is totally non-intrusive and does not carry any such health risks.

Author: PG Scholar, Hindusthan College of Engineering and Technology, Coimbatore, India. e-mail: revathysreeni14@gmail.com

# II. Related Works

Lowe used a modification of the k-d tree algorithm, called the best-bin-first (BBF) search method, which can identify the nearest neighbours with high probability using only a limited amount of computation [12]. The BBF algorithm modifies the search ordering of the k-d tree algorithm so that bins in feature space are searched in order of their closest distance from the query location; this search order is determined efficiently with a heap-based priority queue. The best candidate match for each keypoint is found by identifying its nearest neighbour in the database of keypoints from the training images. When an image contains a complex background, the SIFT descriptors tend to spread over the entire image rather than being concentrated in the object region, so the actual object can be neglected in the matching process. Moreover, since the number of extracted SIFT descriptors is typically large, the computational cost of matching the extracted keypoints is very high.

An adaptive matching algorithm is used to match the person against the trained values. The test image features are compared with the trained image dataset: if the test image features and the trained image features are similar, the person is declared a match; otherwise the person is not matched. When a probe image is given, the algorithm compares the features of the input image with the trained database and reports the result as authenticated [18]. The algorithm concludes only after all the features in the image are matched, which makes this process superior to others; however, different poses are not handled by this method.

Template matching is a simple technique used in image-processing tasks such as feature extraction, edge detection, and object extraction. Template matching can be subdivided into two approaches: feature-based and template-based matching. The feature-based approach uses the features of the search image; to recognize a human face, special features need to be extracted. These special features include the eyes, nose, mouth, and chin, along with the shape of the face. Initially, the subject image is enhanced and segmented. Then the contour features of the face are extracted by a contour-extraction method and compared with the extracted features of the database image [4]. If there is a match, the person in the subject image is recognized.
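All of the matching schemes surveyed above ultimately reduce to comparing test feature vectors against a database of trained feature vectors. As a concrete illustration, here is a minimal brute-force nearest-neighbour matcher with Lowe's distance-ratio acceptance test [12]; the BBF k-d tree search accelerates exactly this query. This is a hedged sketch, and all function and variable names are illustrative rather than taken from the paper.

```python
import numpy as np

def match_keypoints(query_desc, train_desc, ratio=0.8):
    """Brute-force nearest-neighbour matching with a distance-ratio test.

    query_desc: (Nq, D) descriptors from the test image.
    train_desc: (Nt, D) descriptors from the trained database.
    Returns accepted (query_index, train_index) pairs.
    """
    matches = []
    for i, q in enumerate(query_desc):
        # Euclidean distance from this query descriptor to every trained one.
        d = np.linalg.norm(train_desc - q, axis=1)
        nn1, nn2 = np.argsort(d)[:2]          # two closest trained descriptors
        # Accept only if the best match is clearly better than the runner-up.
        if d[nn1] < ratio * d[nn2]:
            matches.append((i, nn1))
    return matches

# Toy usage with 128-D SIFT-like descriptors: 10 planted matches survive,
# while most of the 40 unrelated query descriptors are rejected.
rng = np.random.default_rng(0)
train = rng.normal(size=(200, 128))
query = np.vstack([train[:10] + 0.01 * rng.normal(size=(10, 128)),
                   rng.normal(size=(40, 128))])
print(len(match_keypoints(query, train)))
```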
An important goal of colour adjustment is to render specific colours, particularly neutral colours, correctly; hence the general method is sometimes called gray balance, neutral balance, or white balance.

# III. Proposed Method

The most important technique for the removal of blur in images due to linear motion is the Wiener filter. The first step performed is creating a point spread function (PSF) to add blur to a video. The blur is implemented by first creating a PSF in MATLAB that approximates linear motion blur. This PSF is then convolved with the original image to produce the blurred video. Convolution is a mathematical process by which a signal, in this case the image, is acted on by a system, the filter, in order to find the resulting signal. The amount of blur added to the original video depends on two parameters of the PSF: the length of the blur in pixels and the angle of the blur. After a known amount of blur is introduced into the video, an attempt is made to restore the blurred video to its original form. The Wiener filter is an inverse filter that employs a linear deconvolution method, meaning that the output is a linear combination of the input.

Two common types of impulse noise are salt-and-pepper noise and random-valued noise. For images corrupted by salt-and-pepper noise, the noisy pixels can take only the maximum and minimum values of the dynamic range, whereas for random-valued noise they can take any random value in the dynamic range. There are many works on the restoration of images corrupted by impulse noise; see, for instance, the nonlinear digital filters. For images corrupted by Gaussian noise, least-squares methods based on edge-preserving regularization functionals have been used successfully to preserve the edges and details in the images. These methods fail in the presence of impulse noise because the noise is heavy-tailed; moreover, the restoration alters essentially all pixels in the image, including those that are not corrupted by the impulse noise. Recently, nonsmooth data-fidelity terms have been used along with edge-preserving regularization to deal with impulse noise.

We propose a powerful two-stage scheme which combines a variational method with the adaptive median filter. More precisely, the noise candidates are first identified by the adaptive median filter, and then these candidates are selectively restored by minimizing an objective function with a data-fidelity term and an edge-preserving regularization term. Since the edges are preserved for the noise candidates, and no changes are made to the other pixels, the performance of the combined approach is much better than that of either method alone: salt-and-pepper noise with a noise ratio as high as 90% can be cleaned quite efficiently. The median filter was once the most popular nonlinear filter for removing impulse noise because of its good denoising power and computational efficiency. However, when the noise level is over 50%, some details and edges of the original image are smeared by the filter. Different remedies of the median filter have been proposed, such as the adaptive median filter, the multi-state median filter, and the median filter based on homogeneity information.
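To make the first stage of the two-stage scheme concrete, the following is a minimal adaptive median filter sketch in Python/NumPy (the paper's experiments are in MATLAB); the window-growing logic follows the standard adaptive median scheme, and the function and variable names are illustrative.

```python
import numpy as np

def adaptive_median(img, max_win=7):
    """Adaptive median filter for salt-and-pepper noise (stage-1 detector).

    Grow the window until its median is not itself an extreme value; then
    replace the centre pixel only if it looks like an impulse, leaving
    uncorrupted pixels untouched.
    """
    pad = max_win // 2
    padded = np.pad(img.astype(np.int32), pad, mode="reflect")
    out = img.copy()
    H, W = img.shape
    for y in range(H):
        for x in range(W):
            for win in range(3, max_win + 1, 2):         # 3x3, 5x5, 7x7 ...
                r = win // 2
                nb = padded[y + pad - r:y + pad + r + 1,
                            x + pad - r:x + pad + r + 1]
                zmin, zmed, zmax = nb.min(), int(np.median(nb)), nb.max()
                if zmin < zmed < zmax:                   # median is impulse-free
                    if not (zmin < img[y, x] < zmax):    # centre is an impulse
                        out[y, x] = zmed
                    break
            else:                                        # window maxed out
                out[y, x] = zmed
    return out

# Toy usage: corrupt 40% of pixels with salt/pepper, then restore.
rng = np.random.default_rng(0)
clean = rng.integers(60, 200, size=(64, 64)).astype(np.uint8)
noisy = clean.copy()
mask = rng.random(clean.shape) < 0.4
noisy[mask] = rng.choice([0, 255], size=mask.sum()).astype(np.uint8)
print(np.abs(adaptive_median(noisy).astype(int) - clean.astype(int)).mean())
```

Because only pixels flagged as impulses are touched, the filter's output can directly serve as the set of noise candidates handed to the edge-preserving restoration stage.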
Such decision-based or switching filters first identify possible noisy pixels and then replace them using the median filter or its variants, while leaving all other pixels unchanged. These filters are good at detecting noise even at a high noise level. Their main drawback is that the noisy pixels are replaced by some median value from their vicinity without taking into account local features such as the possible presence of edges; hence details and edges are not recovered satisfactorily, especially when the noise level is high.

Face-detection algorithms focus on the detection of frontal human faces. Detection is analogous to image matching, in which the image of a person is compared bit by bit, so any facial-feature change in the database will invalidate the matching process. The Viola-Jones object detection framework was the first object detection framework to provide competitive object detection rates in real time, motivated primarily by the problem of face detection. A human can do this easily, but a computer needs precise instructions and constraints; to make the task more manageable, Viola-Jones requires full-view, frontal, upright faces. The algorithm has four stages: Haar feature selection, creation of an integral image, AdaBoost training, and cascading classifiers.

The features of the detection framework involve sums of image pixels within rectangular areas, and they bear some resemblance to the Haar basis functions, which had been used previously in the realm of image-based object detection. However, since the features used by Viola and Jones all rely on more than one rectangular area, they are generally more complex. The value of any given feature is simply the sum of the pixels within the clear rectangles subtracted from the sum of the pixels within the shaded rectangles. Rectangular features of this sort are rather primitive compared with alternatives such as steerable filters: although they are sensitive to vertical and horizontal structure, their feedback is considerably coarser.

The Haar features are used to match characteristic properties of human faces. Some of these properties are that the eye region is darker than the upper cheeks and that the nose-bridge region is brighter than the eyes. The composition of properties forming matchable facial features is therefore: location and size (eyes, mouth, and bridge of the nose) and value (oriented gradients of pixel intensities). The rectangle feature value is given by

$$\text{Value} = \sum(\text{pixels in black area}) - \sum(\text{pixels in white area})$$

There are three types of features (two-, three-, and four-rectangle); Viola and Jones used two-rectangle features, for example the difference in brightness between the white and black rectangles over a specific area, where each feature is tied to a particular location in the detection sub-window.

An image representation called the integral image evaluates rectangular features in constant time, which gives them a considerable speed advantage over more sophisticated alternative features. Because each feature's rectangular area is always adjacent to at least one other rectangle, any two-rectangle feature can be computed in six array references, any three-rectangle feature in eight, and any four-rectangle feature in ten. The speed with which features can be evaluated does not, however, adequately compensate for their number.
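A brief sketch of the integral image and a two-rectangle feature follows, assuming a grayscale NumPy image; names are illustrative. With one extra row and column of zeros, any rectangle sum costs exactly four array references, which is what makes constant-time feature evaluation possible.

```python
import numpy as np

def integral_image(img):
    """ii[y, x] = sum of img[:y, :x]; an extra zero row/column simplifies sums."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, y, x, h, w):
    """Sum of the h-by-w rectangle with top-left corner (y, x): 4 references."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def two_rect_feature(ii, y, x, h, w):
    """Horizontal two-rectangle Haar feature: dark left half minus light right
    half (six distinct array references, since two corners are shared)."""
    return rect_sum(ii, y, x, h, w) - rect_sum(ii, y, x + w, h, w)

img = np.arange(36).reshape(6, 6)
ii = integral_image(img)
assert rect_sum(ii, 1, 1, 3, 2) == img[1:4, 1:3].sum()
print(two_rect_feature(ii, 0, 0, 6, 3))
```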
For example, in a standard 24x24 pixel sub-window there is a total of M = 162,336 possible features, and it would be prohibitively expensive to evaluate them all when testing an image. Thus the object detection framework employs a variant of the learning algorithm AdaBoost both to select the best features and to train the classifiers that use them. This algorithm constructs a "strong" classifier as a linear combination of weighted simple "weak" classifiers:

$$h(x) = \operatorname{sgn}\left(\sum_{j=1}^{M} \alpha_j h_j(x)\right) \tag{1}$$

Each weak classifier is a threshold function based on the feature $f_j$:

$$h_j(x) = \begin{cases} -s_j & \text{if } f_j < \theta_j \\ s_j & \text{otherwise} \end{cases} \tag{2}$$

The threshold value $\theta_j$ and the polarity $s_j \in \{\pm 1\}$ are determined in the training, as are the coefficients $\alpha_j$. A simplified version of the learning algorithm is as follows.

Input: a set of $N$ positive and negative training images with their labels $(x_i, y_i)$, where $y_i = 1$ if image $i$ is a face and $y_i = -1$ otherwise.

Initialization: assign a weight $w_1^i = \tfrac{1}{N}$ to each image $i$.

For each feature $f_j$, with $j = 1, \ldots, M$: apply the feature to each image in the training set, and find the optimal threshold and polarity $\theta_j, s_j$ that minimize the weighted classification error,

$$\theta_j, s_j = \arg\min_{\theta, s} \sum_{i=1}^{N} w_j^i \, \varepsilon_j^i, \qquad \varepsilon_j^i = \begin{cases} 0 & \text{if } y_i = h_j(x_i, \theta_j, s_j) \\ 1 & \text{otherwise} \end{cases} \tag{3}$$

Then assign to $h_j$ a coefficient $\alpha_j$ that is inversely proportional to its error rate, so that the best classifiers are weighted most heavily, and reduce the weights $w_{j+1}^i$ for the images $i$ that were correctly classified. The resulting strong classifier is

$$h(x) = \operatorname{sgn}\left(\sum_{j=1}^{M} \alpha_j h_j(x)\right) \tag{4}$$

A feature is defined as an interesting part of an image, and features are used as a starting point for many computer vision algorithms. Since features are the starting point and main primitives for the subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Dense feature detection refers to methods that aim at computing abstractions of image information and making a local decision at every image point as to whether an image feature of a given type is present at that point. The resulting features are subsets of the image domain, often in the form of isolated points, continuous curves, or connected regions. Consequently, the desirable property of a feature detector is repeatability: whether or not the same feature is detected in two or more different images of the same scene.

Scale-space theory is a framework for multi-scale signal representation developed by the computer vision, image processing, and signal processing communities, with complementary motivations from physics and biological vision. It is a formal theory for handling images at different scales by representing an image as a one-parameter family of smoothed images, the scale-space representation, parameterized by the size of the smoothing kernel used for suppressing fine-scale structures. The parameter $t$ in this family is referred to as the scale parameter, with the interpretation that image structures of spatial size smaller than about $\sqrt{t}$ have largely been smoothed away at scale-space level $t$.
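Before moving on, the weak-classifier search of Eqs. (2)-(3) can be made concrete: for each feature it amounts to fitting a decision stump under the current sample weights. The sketch below assumes the feature values have already been evaluated on the training set; the exponential re-weighting at the end is the standard AdaBoost update, shown here for illustration.

```python
import numpy as np

def best_stump(f, y, w):
    """Exhaustively fit the weak classifier of Eq. (2),
    h(x) = -s if f(x) < theta else s, minimizing the weighted
    error of Eq. (3) over candidate thresholds and both polarities."""
    best_err, best_theta, best_s = np.inf, 0.0, 1
    for theta in np.unique(f):
        for s in (1, -1):
            pred = np.where(f < theta, -s, s)
            err = w[pred != y].sum()            # weighted 0/1 error
            if err < best_err:
                best_err, best_theta, best_s = err, theta, s
    return best_err, best_theta, best_s

# One boosting round on a toy feature: alpha grows as the error shrinks,
# and correctly classified samples are down-weighted for the next round.
rng = np.random.default_rng(1)
y = np.where(rng.normal(size=200) > 0, 1, -1)
f = y + 0.5 * rng.normal(size=200)              # weakly informative feature
w = np.full(200, 1 / 200)
err, theta, s = best_stump(f, y, w)
alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
pred = np.where(f < theta, -s, s)
w = w * np.exp(-alpha * y * pred)               # reduce weights of correct hits
w /= w.sum()
print(round(err, 3), round(float(theta), 3), s)
```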
# IV. Decision Based Probabilistic Neural Network

A probabilistic neural network (PNN) is defined as an implementation of the statistical algorithm called kernel discriminant analysis, in which the operations are organized into a multilayered feed-forward network with four layers: input layer, pattern layer, summation layer, and output layer. A PNN is predominantly a classifier, since it can map any input pattern to a number of classifications. The main advantages that distinguish the PNN are a fast training process; an inherently parallel structure; guaranteed convergence to an optimal classifier as the size of the representative training set increases; and the ability to add or remove training samples without extensive retraining.

Pattern layer/unit: there is one pattern node for each training example. Each pattern node forms the product of the input pattern vector $X$ (to be classified) with a weight vector $W_i$, $Z_i = X \cdot W_i$, and then performs a nonlinear operation on $Z_i$ before outputting its activation level to the summation node. Instead of the sigmoid, the nonlinear activation used is

$$\exp\!\left[\frac{Z_i - 1}{\sigma^2}\right] \tag{5}$$

Since both $X$ and $W_i$ are normalized to unit length, this is equivalent to using

$$\exp\!\left[-\frac{(W_i - X)^T (W_i - X)}{2\sigma^2}\right] \tag{6}$$

Summation layer/unit: each summation node receives the outputs from the pattern nodes associated with a given class, and simply sums the inputs from the pattern units that correspond to the category from which the training pattern was selected:

$$\sum_i \exp\!\left[-\frac{(W_i - X)^T (W_i - X)}{2\sigma^2}\right] \tag{7}$$

Output layer/unit: the output nodes are two-input neurons that produce binary outputs, related to two different categories $\Omega_r, \Omega_s$, $r \neq s$, $r, s = 1, 2, \ldots, q$, by using the classification criterion

$$\sum_i \exp\!\left[-\frac{(W_i - X)^T (W_i - X)}{2\sigma^2}\right] > \sum_j \exp\!\left[-\frac{(W_j - X)^T (W_j - X)}{2\sigma^2}\right] \tag{8}$$

These units have only a single corresponding weight $C$, given by the loss parameters, the prior probabilities, and the number of training patterns in each category. Concretely, the corresponding weight is the ratio of the a priori probabilities, divided by the ratio of samples and multiplied by the ratio of losses. This ratio can be determined only from the significance of the decision.

Nonparametric techniques have been developed for estimating univariate probability density functions from random samples. Using the multivariate Gaussian approximation, that is, a sum of multivariate Gaussian distributions centered at each training sample, the probability density function takes the form

$$f_A(X) = \frac{1}{(2\pi)^{d/2} \sigma^d} \, \frac{1}{N} \sum_{i=1}^{N} \exp\!\left[-\frac{(X - X_{Ai})^T (X - X_{Ai})}{2\sigma^2}\right] \tag{9}$$

where $X$ is the input pattern vector, $N$ is the total number of training patterns, $X_{Ai}$ is the $i$-th training pattern from category class $A$, $d$ is the input-space dimension, and $\sigma$ is an adjustable smoothing parameter set by the training procedure. The network is trained by setting the $W_i$ weight vector in one of the pattern units equal to each $X$ pattern in the training set and then connecting that pattern unit's output to the appropriate summation unit. In the decision-based PNN, an if-else rule is applied in the classification process: the class decision is taken through an if-else cascade over the class scores, and authentication is likewise based on the if-else condition, giving higher accuracy.
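A compact sketch of the PNN forward pass defined by Eqs. (5)-(8) follows, assuming unit-normalized patterns and uniform class priors (so the output weight $C$ is omitted); names are illustrative.

```python
import numpy as np

def pnn_classify(X, train, labels, sigma=0.3):
    """Probabilistic neural network forward pass.

    train: (N, d) unit-normalized training patterns (the pattern-layer
           weight vectors W_i); labels: (N,) class ids; X: (d,) query.
    Each pattern unit emits exp((Z_i - 1)/sigma^2) with Z_i = X . W_i
    (Eq. 5); each summation unit adds the activations of its class
    (Eq. 7); the output unit picks the larger sum (Eq. 8).
    """
    X = X / np.linalg.norm(X)
    Z = train @ X                          # inner products with all W_i
    act = np.exp((Z - 1.0) / sigma**2)     # pattern-layer activations
    classes = np.unique(labels)
    sums = np.array([act[labels == c].sum() for c in classes])
    return classes[np.argmax(sums)]        # decision-based comparison

# Toy usage: two clusters on the unit sphere in 3-D.
rng = np.random.default_rng(2)
A = rng.normal([1, 0, 0], 0.2, size=(20, 3))
B = rng.normal([0, 1, 0], 0.2, size=(20, 3))
train = np.vstack([A, B])
train /= np.linalg.norm(train, axis=1, keepdims=True)
labels = np.array([0] * 20 + [1] * 20)
print(pnn_classify(np.array([0.9, 0.1, 0.0]), train, labels))  # -> 0
```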
# a) Feature Extraction Using IRHT

An iterative randomized Hough transform (IRHT) is developed for the detection of incomplete ellipses in images with strong noise. The IRHT iteratively applies the randomized Hough transform (RHT) to a region of interest in the image space, where the region of interest is determined from the latest estimate of the ellipse parameters. The IRHT zooms in on the target curve by iterative parameter adjustments and reciprocating use of the image and parameter spaces. During the iteration process, noise pixels are gradually excluded from the region of interest, and the estimate becomes progressively closer to the target. The IRHT retains the advantages of the RHT, namely high parameter resolution, computational simplicity, and small storage, while overcoming the noise susceptibility of the RHT. In addition, multiple instances of ellipses can be detected sequentially. The IRHT was first tested for ellipse detection on synthesized images and was then applied to fetal head detection in medical ultrasound images; the results demonstrate that the IRHT is a robust and efficient ellipse detection method for real-world applications. At each iteration count, the Hough stage consists of:

1. Mapping of edge points to the Hough space and storage in an accumulator.
2. Interpretation of the accumulator to yield lines of infinite length; the interpretation is done by thresholding and possibly other constraints.
3. Conversion of infinite lines to finite lines.

# b) Feature Matching Using Adaptive Hamming Distance

The weighted Hamming distance has been used for image retrieval, including Hamming distance weighting and AnnoSearch. In these methods each bit of the binary code is assigned a bit-level weight, the aim being to weight the overall Hamming distance of local features for image matching. Only a single set of weights is used, either to measure the importance of each bit in Hamming space or to rescale the Hamming distance for better image matching. Jiang et al. propose a query-adaptive Hamming distance for image retrieval which assigns dynamic weights to hash bits, so that each bit is treated differently and dynamically. They harness a set of semantic concept classes that cover most semantic elements of image content; then different weights for each of the classes are learned with a supervised learning algorithm. To compute the bit-level weights for a given query, a k-nearest-neighbour search is first performed using the original Hamming distance, and then a linear combination of the weights of the classes contained in the result list is used as the query-adaptive weights.

In this section, we present the weighted Hamming distance ranking algorithm. In most binary hashing algorithms, the distance between two points is simply measured by the Hamming distance between their binary codes. This distance metric is somewhat ambiguous, since for a K-bit binary code $H(p)$ there are $\binom{K}{m}$ different binary codes sharing the same distance $m$ to $H(p)$; moreover, each hash bit takes the same weight and makes the same contribution to the distance calculation. In our algorithm, by contrast, different bits are given different weights. With the bit-level weights, the returned binary codes can be ranked by the weighted Hamming distance at a finer-grained binary-code level rather than at the original integer Hamming-distance level. The bit-level weight associated with hash bit $k$ is denoted $w_k$; in the following, we show that an effective bit-level weight is not only data-dependent but also query-dependent. Note that our algorithm does not propose a new binary hashing method; it is a ranking algorithm that improves the search accuracy of most existing binary hashing methods.

Some notation will facilitate the discussion. Given a dataset $X = \{x_i\},\ i = 1, \ldots, n$, the neighbour set of a query $q$ is denoted $N(q)$. The paradigm of binary hashing is first to use a set of linear or nonlinear hash functions $F = \{f_k : \mathbb{R}^d \to \mathbb{R}\}_{k=1}^{K}$ to map $x \in \mathbb{R}^d$ to $F(x) \in \mathbb{R}^K$, and then to binarize $H(x) = (h_1(x), \ldots, h_K(x))^T$ by comparing each $f_k(x)$ with a threshold $t_k$, giving a K-bit binary code $H(x) \in \{0,1\}^K$. Hence the binary hash function is $h_k(x) = \operatorname{sgn}(f_k(x) - t_k)$, and we call $f_k(x)$ the unbinarized hash value. Each dimension of $H(x)$ is called a hash bit; for a query $q$ and its neighbour $p$, if the $k$-th bits of $H(q)$ and $H(p)$ differ, we say there is a bit flip on hash bit $k$. The weighted Hamming distance between two binary codes $b^{(1)}$ and $b^{(2)}$ is denoted $d_{WH}(b^{(1)}, b^{(2)})$.
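The ranking idea can be sketched under one simple, assumed weighting scheme: a query bit whose unbinarized hash value $f_k(q)$ lies close to its threshold $t_k$ is unreliable, so a flip on it is penalized less. This captures the query-dependence discussed above, but the specific weight formula below is illustrative, not the learned weighting of the cited methods.

```python
import numpy as np

def hash_codes(X, W, t):
    """Linear hashing: f_k(x) = w_k . x, bit h_k(x) = 1 if f_k(x) > t_k."""
    F = X @ W.T                       # unbinarized hash values, (n, K)
    return (F > t).astype(np.uint8), F

def weighted_hamming_rank(q, db_codes, W, t, alpha=1.0):
    """Rank database codes by weighted Hamming distance to the query.

    Assumed bit-level weight: w_k = 1 - exp(-alpha * |f_k(q) - t_k|),
    i.e. flips on confident query bits cost more (query-adaptive).
    """
    fq = W @ q                        # query's unbinarized values, (K,)
    qc = (fq > t).astype(np.uint8)
    w = 1.0 - np.exp(-alpha * np.abs(fq - t))
    flips = db_codes != qc            # (n, K) boolean bit-flip matrix
    d = (flips * w).sum(axis=1)       # weighted Hamming distances
    return np.argsort(d)              # finer-grained than integer distance

# Toy usage: 16-bit random-projection hashing of 1000 points in 32-D.
rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 32))
W = rng.normal(size=(16, 32))
t = np.zeros(16)
codes, _ = hash_codes(X, W, t)
order = weighted_hamming_rank(X[0], codes, W, t)
print(order[:5])                      # the query's own index ranks first
```

Because the weights are real-valued, ties among codes at the same integer Hamming distance are broken deterministically, which is exactly the finer-grained ranking described above.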
# V. Experimental Results

# a) Simulation Results of Face Recognition

![Figure 1: Block diagram for face recognition](image-2.png)
![Figure 2: Block diagram for iris recognition](image-4.png)

![Figure 3: Input video for face](image-8.png)

![Figure 4: Deblurred face](image-5.png)

Figure 3 shows the input video taken for face recognition from a side pose, and Figure 4 shows the deblurred face. The blur is implemented by first creating a PSF filter in MATLAB that approximates linear motion blur; this PSF is then convolved with the original image to produce the blurred image. The most important technique for the removal of blur due to linear motion is the Wiener filter, and this restoration process is called the illumination adjustment.

![Figure 5: Removal of noise from face](image-6.png)

![Figure 6: Face detection](image-7.png)

Figure 5 shows the removal of salt-and-pepper noise from the face image using the median filter, and Figure 6 shows the face detected from the video using the Viola-Jones algorithm.

![Figure 7: Extraction of local features](image-9.png)

Figure 7 shows the extraction of local features from the detected face using the multi-scale feature extraction method; the features from the chin, cheek, and ear points are extracted.

![Figure 8: Face authentication](image-10.png)

![Figure 9: Command window from MATLAB](image-11.png)

Figure 9 shows the output command window obtained by running the program code; these values are used for evaluating the performance of the proposed methods.

![Figure 10: Performance graph of template matching versus DBPNN](image-12.png)

Figure 10 gives the performance evaluation of template matching against DBPNN, where the graph plots accuracy against recognition rate.

# b) Simulation Results of Iris Recognition

![Figure 12: Deblurred images](image-13.png)

![Figure 13: Noise removals from video](image-14.png)

Figure 13 shows the removal of salt-and-pepper noise from the input video using the median filter, and Figures 14 and 15 show the face and eyes detected from the input video using the Viola-Jones algorithm.

![Figure 16: Feature extraction](image-15.png)

![Figure 17: IRHT command window](image-16.png)

![Figure 18: Authentication](image-17.png)

![Figure 19: Command window of MATLAB from executing the program code](image-18.png)

Figure 19 gives the values needed for the performance evaluation between adaptive matching and adaptive Hamming distance. Figure 17
shows the output obtained from the eye features using the IRHT method, and Figure 18 shows the authentication of the iris features obtained with IRHT.

![Figure 20: Performance graph of adaptive matching versus adaptive Hamming distance](image-20.png)

Figure 20 shows the comparison between the accuracy and recognition rate of the adaptive matching and adaptive Hamming distance methods.

# c) Comparison Table for Face Recognition Method

The overall pipeline proceeds through the following steps: input video; illumination adjustment using the Wiener filter; noise removal using the median filter; feature detection using the Viola-Jones algorithm; feature extraction using IRHT; and feature matching using the adaptive Hamming distance.

| Algorithm | Accuracy | Recognition Rate |
| --- | --- | --- |
| Template Matching (existing) | 93.33 | 30 |
| DBPNN (proposed) | 96.66 | 33.33 |

# d) Comparison Table for Iris Recognition Method

[Table: comparison chart of output values of iris recognition]

The recognition rate is calculated by

$$\text{Recognition Rate} = \frac{\text{total no. of images correctly recognized}}{\text{total no. of images given}}$$

and accuracy is calculated analogously as the percentage of samples classified correctly.

# VI. Conclusion

Face recognition is a complicated task that requires efficient handling of complex variations in facial appearance caused by a range of factors such as illumination variation, expressions, and aging effects. In the existing method, a highly compact and discriminative feature descriptor with an adaptive matching framework is used for enhanced face recognition, but that method is not suitable for a face in side pose or at a different angle. To overcome this, a new methodology is proposed which performs face recognition under the combined effects of deblurring and denoising. In this method, the Viola-Jones algorithm is first used to detect the face, and dense feature extraction is used for feature extraction. The DBPNN is used to detect the face from a side pose, and finally iris detection is performed using the iterative randomized Hough transform, where the pupil of the eye is detected over a number of iterations. The proposed method significantly reduces the dimension of the feature representation and improves computational efficiency without sacrificing recognition performance. Future work is to recognize the face in side pose even from an occluded face.

# References

1. T. Ahonen, A. Hadid, and M. Pietikäinen, "Face recognition with local binary patterns," in Proc. ECCV, 2004.
2. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, "Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 19, no. 7, Jul. 1997.
3. B.-K. Bao, G. Liu, R. Hong, S. Yan, and C. Xu, "General subspace learning with corrupted training data via graph embedding," IEEE Trans. Image Process., vol. 22, no. 11, Nov. 2013.
4. Z. Cao, Q. Yin, X. Tang, and J. Sun, "Face recognition with learning-based descriptor," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2010.
5. D. Chen, X. Cao, F. Wen, and J. Sun, "Blessing of dimensionality: High-dimensional feature and its efficient compression for face verification," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2013.
6. N. Dalal and B. Triggs, "Histograms of oriented gradients for human detection," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 2005.
7. G. Huang, T. Berg, and E. Learned-Miller, "Labeled faces in the wild: A database for studying face recognition in unconstrained environments," Univ. Massachusetts, Amherst, Tech. Rep., Oct. 2007.
8. B. Klare and A. K. Jain, "Heterogeneous face recognition: Matching NIR to visible light images," in Proc. 20th Int. Conf. Pattern Recognit. (ICPR), Aug. 2010.
9. B. F. Klare, Z. Li, and A. K. Jain, "Matching forensic sketches to mug shot photos," IEEE Trans. Pattern Anal. Mach. Intell., vol. 33, no. 3, Mar. 2011.
10. Z. Lei, M. Pietikäinen, and S. Z. Li, "Learning discriminant face descriptor," IEEE Trans. Pattern Anal. Mach. Intell., vol. 36, no. 2, Feb. 2014.
11. J. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, "Face recognition using kernel direct discriminant analysis algorithms," IEEE Trans. Neural Netw., vol. 14, no. 1, Jan. 2003.
12. D. Lowe, "Object recognition from local scale-invariant features," in Proc. IEEE Int. Conf. Comput. Vis., Sep. 1999.
13. C. Liu, "Gabor-based kernel PCA with fractional power polynomial models for face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 5, May 2004.
14. T. Mäenpää and M. Pietikäinen, "Multi-scale binary patterns for texture analysis," in Proc. SCIA, 2003.
15. O. M. Parkhi, K. Simonyan, A. Vedaldi, and A. Zisserman, "Fisher vector faces in the wild," in Proc. BMVC, 2013, pp. 8.1-8.12.
16. M. A. Turk and A. Pentland, "Face recognition using eigenfaces," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. Pattern Recognit., Jun. 1991.
17. X. Wang and X. Tang, "A unified framework for subspace face recognition," IEEE Trans. Pattern Anal. Mach. Intell., vol. 26, no. 9, Sep. 2004.
18. Z. Li, D. Gong, X. Li, and D. Tao, "Learning compact feature descriptor and adaptive matching framework for face recognition," IEEE Trans. Image Process., vol. 24, no. 9, Sep. 2015.