# I. INTRODUCTION

Face is the most precise and extensively used key to a person's identity. Face recognition has attracted considerable attention in the advancement of human-machine interaction, as it provides a natural and efficient way for humans and machines to communicate. In recent years considerable progress has been made in the area of face recognition, and computers can now compete favorably with humans in many face recognition tasks, particularly those in which large databases of faces must be searched. The problem of detecting and recognizing faces in real-time video sequences has become a popular area of research due to emerging applications in human-computer interfaces, surveillance systems, secure access control, video conferencing, financial transactions, forensic applications, pedestrian detection, driver alertness monitoring systems, image database management systems, and so on. The goal of this work is to develop an efficient, real-time face recognition system able to recognize a person as soon as he or she appears in front of the camera.

# II. SYSTEM ARCHITECTURE

The process of identifying/recognizing a person in this research is based mainly on three phases:

i. Image acquisition from a PC camera.
ii. Face detection using local SMQT features and the split-up SNoW classifier.
iii. Face recognition:
  a) facial feature extraction using gray-scale morphology;
  b) classification using an artificial neural network.

Internally, all pattern recognition systems consist of the same sequence of processes [1]. Since the output of each operation is the input to the next, the functional parts (1-6) must execute in sequence; in our research, Tasks 4 and 5 are combined. The size of every image (input and output) is kept standard so that there is better control and accuracy during matrix computation and parameter training. The system did not work well when it was first created, and many tests and changes were made along the way. We decided that, as long as we had enough images to train well and reveal the mapping structure, we should keep the algorithm and the training data simple. Fig. 1 pictorially describes the whole system.

# III. FACE DETECTION USING LOCAL SMQT FEATURES AND SPLIT-UP SNOW CLASSIFIER

Appearance-based face detection can be approached as a pattern recognition problem in several ways [5], [6]. Techniques proposed for this task include neural networks (NN) [7], probabilistic modeling [8], a cascade of boosted features (AdaBoost) [9], the Sparse Network of Winnows (SNoW) [4], [10], a combination of AdaBoost and SNoW [11], and the Support Vector Machine (SVM) [12]. We adopt a face detection framework in which illumination-insensitive features are obtained from local SMQT features and rapid detection is achieved by the split-up SNoW classifier [4].

The Successive Mean Quantization Transform (SMQT) [4] is used for automatic enhancement of gray-scale images. The histogram of an SMQT-enhanced image retains the basic shape of the original but is stretched to exploit the whole dynamic range; the SMQT thus adapts the shape of the histogram by performing a nonlinear stretch. The nonlinear properties of the SMQT yield a balanced stretch of the histogram, as shown in Fig. 2.
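For concreteness, a minimal NumPy sketch of the SMQT follows. This is our own illustrative implementation, not the one from [4]; the number of quantization levels `levels` is an assumed parameter.

```python
import numpy as np

def smqt(image, levels=8):
    """Successive Mean Quantization Transform (SMQT), a rough sketch.

    At each level every current subset of pixels is split around its mean;
    pixels above the mean contribute a 1-bit, the rest a 0-bit. After
    `levels` splits each pixel carries a `levels`-bit code, which produces
    the nonlinear, illumination-insensitive stretch described in the text.
    """
    out = np.zeros(image.shape, dtype=np.int32)
    # Each entry is a boolean mask selecting one current subset of pixels.
    subsets = [np.ones(image.shape, dtype=bool)]
    for _ in range(levels):
        next_subsets = []
        for mask in subsets:
            if not mask.any():
                continue
            mean = image[mask].mean()
            upper = mask & (image > mean)
            lower = mask & ~(image > mean)
            out[upper] = (out[upper] << 1) | 1  # bit 1: above the subset mean
            out[lower] = out[lower] << 1        # bit 0: at or below the mean
            next_subsets.extend([upper, lower])
        subsets = next_subsets
    return out  # values in [0, 2**levels - 1]
```

With `levels=8` the output spans the full 8-bit dynamic range regardless of the brightness and contrast of the input, which is what makes the resulting features attractive for illumination-insensitive detection.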
The SNoW learning architecture is a multi-class classifier specifically tailored for large-scale learning tasks and for domains in which the potential number of features taking part in decisions is very large. With the SNoW and the split-up SNoW classifier, a pretrained lookup table is scanned for faces. Overlapping detections are then pruned using geometrical location and classification scores: each detection is tested against all other detections, and if the area overlap ratio of a pair exceeds a fixed threshold, the two detections are considered to belong to the same face. Of two overlapping detections, the one with the higher classification score is kept and the other is removed. This procedure is repeated until no more overlapping detections are found.

# IV. FACIAL FEATURE EXTRACTION USING GRAY-SCALE MORPHOLOGY

Morphological operations [2], [14] are used as a tool for extracting image components that are useful in the representation and description of region shape, namely boundaries, skeletons, and the convex hull. Morphological processes are nonlinear and translation-invariant [2], while neural networks have good generalization capabilities [3]. Our system is therefore a heterogeneous network that produces high-order features based on local features extracted by morphological operations. The information obtained by mathematical morphology is highly dependent on the structuring elements, or kernels [2].

In the initial stage, we performed a combination of gray-scale erosion and dilation known as the hit-miss transform [14]. Each input image was eroded by a "hit" structuring element and dilated by a "miss" structuring element separately, and the two outputs were then subtracted to derive their difference. The result of this process forms the feature map shown in Fig. 3, which becomes the direct input to a backpropagation network [3]. The network can have one or more layers, and each layer can also have one or more feature maps.

The gray-scale erosion of $f$ by structuring element $b$, denoted $f \ominus b$, is defined as

$$(f \ominus b)(x, y) = \min\{\, f(x + x', y + y') - b(x', y') \mid (x', y') \in D_b \,\},$$

where $D_b$ is the domain of $b$ and $f(x, y)$ is assumed to equal $+\infty$ outside the domain of $f$. The corresponding gray-scale dilation, denoted $f \oplus b$, is

$$(f \oplus b)(x, y) = \max\{\, f(x - x', y - y') + b(x', y') \mid (x', y') \in D_b \,\}.$$

In practice, gray-scale dilation and erosion are usually performed using flat structuring elements, in which the value (height) of $b$ is 0; the simplified forms are then

$$(f \oplus b)(x, y) = \max\{\, f(x - x', y - y') \mid (x', y') \in D_b \,\},$$
$$(f \ominus b)(x, y) = \min\{\, f(x + x', y + y') \mid (x', y') \in D_b \,\}.$$

Dilation and erosion can be combined to achieve a variety of effects. For instance, subtracting an eroded image from its dilated version produces a "morphological gradient," a measure of local gray-level variation in the image, shown in Fig. 6:

$$\text{Morphological gradient} = (f \oplus b) - (f \ominus b).$$
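For illustration, here is a minimal sketch of this gradient using SciPy's gray-scale morphology routines; it is a stand-in rather than the paper's implementation, and the default `size=2` is assumed here to mean a 2x2 flat kernel, echoing the structuring-element size of 2 reported later in the text.

```python
import numpy as np
from scipy import ndimage

def morphological_gradient(image, size=2):
    """Morphological gradient g = (f dilate b) - (f erode b) with a flat
    (zero-height) structuring element, i.e. a local max minus a local min,
    which measures local gray-level variation."""
    dilated = ndimage.grey_dilation(image, size=(size, size))
    eroded = ndimage.grey_erosion(image, size=(size, size))
    # Signed arithmetic so the subtraction cannot wrap around for uint8 input.
    return dilated.astype(np.int16) - eroded.astype(np.int16)
```

SciPy also provides `ndimage.morphological_gradient`, which computes the same quantity in a single call.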
# V. FACE RECOGNITION USING ARTIFICIAL NEURAL NETWORK

A neural-network-based face recognition system is used to recognize front-view face images [4]. The standard backpropagation algorithm is redesigned with multiple hidden layers to simultaneously perform the hit-miss transformation, train, and classify features within the same iteration. The recognized image is determined by the corresponding output value lying within a certain threshold.

There are virtually no tools to help us select an appropriate architecture and learning parameters for a neural network; in most cases, learning parameters are determined by experience or by trial and error [3]. We may work both ways to select our best set of parameters:

a. Start from a large network and successively remove neurons and links until network performance degrades.
b. Start with a small network and introduce new neurons until performance is satisfactory.

Our proposed system is special in that it has an extra network stage that depends on structuring elements to extract features. Feature extraction is performed over the entire image as well as over its sub-images in separate networks. Besides gray-level shifts, Won [13] and Skubic [14] were also concerned with the sensitivity of network performance to the size of the structuring element. We use a structuring element of size 2; the size can be varied from 1 to 3, but testing showed that a size of 2 is best for our system.

The complete system can be described in two stages: 1) a learning phase and 2) a testing phase. In the learning phase, some reference faces are selected to create a knowledge base. Facial images are acquired directly from the camera device. The next step is face detection, in which the face region is located within the whole image. Preprocessing follows, in which the necessary enhancement is applied to improve image quality. The next phase is feature extraction, in which facial features are extracted in the form of a 50×50 face image matrix. These features are fed into the neural network for training, and after training the weights are saved as the knowledge base.

In the testing phase, the system is allowed to recognize an unknown face. All steps are carried out in the same way as in the learning phase up to classification. In the classification phase, the selected features, which carry enough information, are used to identify each facial image uniquely. The system does this using the knowledge base (the saved weights) with the extracted features of the unknown face as input to the network. In the last step, the final decision about the unknown face is made by comparing the test output with the target defined for each person (class) during training, using a threshold value to decide between 0-like and 1-like target digits.

Our proposed Backpropagation Network consists of one input layer of 2500 neurons, three hidden layers (855-500-30), and one output layer of 10 neurons; Fig. 7 shows the proposed network.
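To make the topology and the thresholded decision rule concrete, the following NumPy sketch builds the 2500-855-500-30-10 network as a plain feed-forward classifier. The weight initialization and the single-winner reading of the 0.98 threshold are our own assumptions, and the standard backpropagation training loop is omitted.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the text: 50x50 = 2500 inputs, hidden 855-500-30, 10 classes.
LAYER_SIZES = [2500, 855, 500, 30, 10]

def init_weights(sizes, seed=0):
    """Small random weights, zero biases (an assumption; the paper does not
    state its initialization scheme)."""
    rng = np.random.default_rng(seed)
    return [(rng.normal(0.0, 0.1, (n_out, n_in)), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def forward(x, weights):
    """Plain feed-forward pass with sigmoid units in every layer."""
    a = x
    for W, b in weights:
        a = sigmoid(W @ a + b)
    return a

def classify(feature_map, weights, threshold=0.98):
    """Apply the 0.98 output threshold used by the online system: return a
    class index only when a single output is confidently on, otherwise
    report the face as unrecognized (None)."""
    out = forward(feature_map.reshape(-1), weights)
    hits = np.flatnonzero(out >= threshold)
    return int(hits[0]) if hits.size == 1 else None

# Example: a random 50x50 feature map through an untrained net prints None.
weights = init_weights(LAYER_SIZES)
print(classify(np.random.default_rng(1).random((50, 50)), weights))
```

The high threshold trades recall for precision, which matches the online behavior reported in the next section: fewer false recognitions at the cost of more "unrecognized" results.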
# VI. RESULT AND DISCUSSION

To check the reliability of the system, we first performed offline testing using the ORL (Olivetti Research Laboratory) database [15] as well as our locally created database (Fig. 8); the recognition rates were approximately 98% and 94%, respectively. The accuracy can be raised further by running 20,000 or more backpropagation iterations. It also depends heavily on the sensing device (camera) and the lighting conditions: with a high-quality camera and approximately constant lighting, the recognition rate increases.

Finally, an attempt was made to recognize human faces through an online interface, where, as soon as a person appears in front of the camera, the result of the recognition procedure is shown almost simultaneously, frame by frame. For the online system we strictly try to reduce false recognitions rather than non-recognitions: owing to the very high threshold (0.98) on the test output, the chance of a false recognition is reduced, but the rate of "unrecognized" results rises.

# VII. CONCLUDING REMARKS

This system works predictably and fairly reliably. A face recognition system must be able to recognize a face in many different imaging situations: the appearance of a face in a 2D image is influenced not only by its identity but also by other variables such as brightness and the size and orientation of the face in the photo. We have tested our project with slightly different face orientations. The performance of our online system depends heavily on the learning phase; as the number of faces per person with varying orientations and illuminations in the training set increases, the recognition results become more accurate. Our future work includes developing a new algorithm that will eliminate the drawbacks of this system by enhancing the capacity of the face database; for this purpose, a clustered network can be implemented.

Figure 1: Our proposed system.
Figure 2: (A) Original image histogram. (B) SMQT-enhanced image histogram.
Figure 3: Diagram of feature extraction.
Figure 5: (A) Original image. (B) Eroded image.
Figure 6: Morphological gradient image.
Figure 7: Our proposed Backpropagation neural network.
Figure 8: (A) ORL (Olivetti Research Laboratory) database. (B) Locally created database (automatically cropped by the face detection phase).

# REFERENCES

1. R. Gonzalez and R. Woods, Digital Image Processing, 2nd edition, Wiley and Sons Inc., 1991.
2. Rafael C. Gonzalez, Richard E. Woods, and Steven L. Eddins, Digital Image Processing Using MATLAB.
3. S. N. Sivanandam and M. Raj, Introduction to Artificial Neural Networks.
4. M. Nilsson, J. Nordberg, and I. Claesson, "Face Detection using Local SMQT Features and Split Up SNoW Classifier," IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2007.
5. M.-H. Yang, D. Kriegman, and N. Ahuja, "Detecting faces in images: A survey," IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), vol. 24, no. 1, 2002.
6. E. Hjelmas and B. K. Low, "Face detection: A survey," Computer Vision and Image Understanding, vol. 83, no. 3, 2001.
7. H. Rowley, S. Baluja, and T. Kanade, "Neural network-based face detection," Proceedings of Computer Vision and Pattern Recognition, June 1996.
8. H. Schneiderman and T. Kanade, "Probabilistic modeling of local appearance and spatial relationships for object recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '98), July 1998.
9. P. Viola and M. Jones, "Rapid object detection using a boosted cascade of simple features," Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, 2001.
10. D. Roth, M. Yang, and N. Ahuja, "A SNoW-based face detector," Advances in Neural Information Processing Systems 12 (NIPS 12), MIT Press, 2000.
11. B. Froba and A. Ernst, "Face detection with the modified census transform," Sixth IEEE International Conference on Automatic Face and Gesture Recognition, May 2004.
12. E. Osuna, R. Freund, and F. Girosi, "Training support vector machines: an application to face detection," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR '97), 1997.
13. Y. Won, "Morphological Shared-Weight Networks with Applications to Automatic Target Recognition," Electronics and Telecommunications Research Institute, Daejeon, South Korea, 1995.
14. D. Haun, K. Hummel, and M. Skubic, "Morphological Neural Network Vision Processing for Mobile Robots," University of Missouri-Columbia, 1997.