
FACE DETECTION APP

ABSTRACT:

Face Recognition is a computer application capable of detecting, tracking, identifying or verifying
human faces in an image or video captured with a digital camera. Although a lot of progress has been
made in face detection and recognition for security, identification and attendance purposes, there are
still issues hindering progress toward human-level accuracy. These issues arise from variations in human
facial appearance, such as varying lighting conditions, noise in face images, scale and pose. This paper
presents a new method that combines the Local Binary Pattern (LBP) algorithm with advanced image
processing techniques, namely Contrast Adjustment, Bilateral Filtering, Histogram Equalization and Image
Blending, to address some of the issues hampering face recognition accuracy. Improving the LBP codes in
this way improves the accuracy of the overall face recognition system. Our experimental results show that
the method is accurate, reliable and robust enough to be practically deployed in a real-life environment
as an automatic attendance management system.

The human face is a sophisticated multidimensional structure that conveys a great deal of information
about an individual, including expression, emotion and facial features. Analyzing these features
effectively and efficiently is a challenging task that requires considerable time and effort. Recently,
many facial recognition algorithms for automatic attendance management have been proposed, successfully
implemented and used, as in Refs. [[1], [2], [3], [4]]; new algorithms have also been developed, and
existing algorithms improved or combined with other methods, techniques or algorithms to build facial
recognition systems and applications, as in Refs. [[5], [6], [7], [8]].

Although much has been achieved in devising facial recognition algorithms and systems, reaching
human-level accuracy requires that the major issues associated with these algorithms and systems be
greatly mitigated or addressed, as argued in Ref. [9]. Only then can a reliable and accurate facial
recognition-based automatic attendance management system be realized, which can be very useful in the
area of substantiation.

The main challenges for successful face detection and recognition systems are illumination conditions,
scale, occlusion, pose, background, expression and so on, as highlighted in Refs. [10,11]. Various
algorithms and methods have been proposed to address these challenges: N. Pattabhi Ramaiah Ref. [12]
uses illumination-invariant face recognition based on Convolutional Neural Networks to address
illumination conditions; Abass et al. Ref. [13] address the issues of shift and rotation using the
complex wavelet transform (CWT) and Fisherfaces; and to address issues related to pose, Kishor et al.
Ref. [14] propose robust pose-invariant face recognition using Dual Cross Patterns (DCP), LBP and a
Support Vector Machine (SVM).

Our research work is divided into two main sections: the first focuses mainly on improving the face
recognition algorithm, while the second focuses on the attendance management system built on the
recognized faces. In the first section, a digital live camera placed at the entrance captures images of
staff entering an office or building. Advanced image processing techniques, such as contrast adjustment,
noise reduction using a bilateral filter and image histogram equalization, are applied to the captured
images to improve their quality. The Haar algorithm is then applied to the captured images to detect
individual faces, which are used as input to the face recognition system.

The same advanced image processing techniques, plus an image blending technique, are applied a priori to
the training/template face images. The improved input images are then compared with the improved training
images using the LBP algorithm, yielding improved LBP codes for recognizing faces; the facial recognition
accuracy is thus improved over traditional LBP codes computed without our method. In the second section,
metadata of the recognized facial images, such as date and time, is automatically extracted to mark
everyone's attendance automatically.

EXISTING SYSTEM:

Face detection and recognition generic framework

The first step in face recognition is face detection, which can generally be regarded as face
localization: identifying and localizing the face. Face detection technology is imperative to support
applications such as automatic lip reading, facial expression recognition and face recognition [6][7].
The frameworks for face detection and face recognition are very similar. In this paper, a generic
framework is taken as an instance from the research done by Shang-Hung Lin [8]. This framework consists
of two functional segments: a face image detector and a face recognizer. The face image detector searches
for human faces in the image and localizes the faces against the background. After a face has been
detected and localized, the face recognizer determines who the person is. Both the detector and the
recognizer have a feature extractor and a pattern recognizer. The feature extractor transforms the pixels
of the image into a vector representation.
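
In its simplest form, such a feature extractor just normalizes and flattens the pixel grid into a vector. A minimal sketch (real systems use richer features, such as the LBP histograms described later in this document):

```python
import numpy as np

def to_feature_vector(face_img):
    """Flatten an image's pixels into a 1-D feature vector, scaled to [0, 1]."""
    return face_img.astype(np.float32).ravel() / 255.0
```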

Challenges in face detection and recognition


Detecting and recognizing faces is challenging because faces vary widely in pose, shape, size and
texture. The problems and challenges in face detection and recognition are listed as follows [9]:

 Pose

o A face varies depending on the position of the camera when the image is captured.

 Presence of structural components

o There may be additional components on the face, such as spectacles, a moustache or a beard.

o These components may differ in type, shape, colour and texture.

 Facial expression

o Facial expressions directly change the appearance of a person's face.

 Occlusion

o A face may be partially obstructed by someone or something when the image is captured in a crowd.

 Image orientation

o This involves variation in the rotation of the camera's optical axis.

 Imaging condition

o The condition of an image depends on the lighting and camera characteristics. There are other
challenges in face detection and recognition (not discussed in this paper), but these are the most
general problems.

PROPOSED SYSTEM:

Systems design is a process that defines the architecture, components, modules, interfaces and data
requirements of a system. System design can be viewed as an application of systems theory to product
development. Face detection technology locates human faces in digital images and video frames; object
detection technology, more generally, detects instances of objects in digital images and videos. The
proposed automated recognition system can be divided into five main modules:

Image Capture. A camera is placed at a distance from the entrance to capture a frontal image of the
student. The image then goes on to face detection.

Face Detection and Facial Features. An appropriate and effective face detection algorithm constantly
improves facial recognition. There are several families of face detection algorithms, such as
face-geometry-based methods, feature-invariant methods, construction methods and machine-learning-based
methods. Among these, Viola and Jones proposed a framework that gives a high detection rate and is also
fast. The Viola-Jones detection algorithm is fast and robust, so we chose it; it uses the integral image
and the AdaBoost learning algorithm as a classifier. We have observed that this algorithm yields good
results in a variety of lighting conditions.

Pre-Processing. Preparing the extracted face features is called pre-processing. This step crops the
extracted facial image and rescales it to 100x100 pixels. Histogram equalization, the most commonly used
histogram normalization technique, is then applied; it improves the contrast of the image by stretching
its intensity range, making the image clearer.

Database Development. Since we chose a biometric-based system, enrollment of every individual is
required. This phase consists of capturing an image of each individual, extracting the biometric
features, enhancing them using the pre-processing techniques and storing them in the database.

Post-Processing. In the proposed system, after the faces are recognized, the names are shown in the
video output. The result is generated by the export mechanism of the database system, and the generated
records can be seen in the real-time video. Persons whose faces are not recognized correctly by the
system can be checked against the database manually, giving operators the ability to correct the system
and make it more stable and accurate.
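
The record-generation step described above, where recognized names are written out as attendance records with a date and time stamp, can be sketched as follows. The CSV layout, file name and function name are hypothetical, not taken from the actual system:

```python
import csv
from datetime import datetime

def mark_attendance(name, path='attendance.csv'):
    """Record (name, date, time) once per person per day; returns True if written."""
    now = datetime.now()
    today = now.strftime('%Y-%m-%d')
    try:
        with open(path, newline='') as f:
            # Skip writing if this person is already marked present today
            if any(row[0] == name and row[1] == today for row in csv.reader(f)):
                return False
    except FileNotFoundError:
        pass  # first record creates the file
    with open(path, 'a', newline='') as f:
        csv.writer(f).writerow([name, today, now.strftime('%H:%M:%S')])
    return True
```

A real deployment would key records on a stable identifier rather than a display name, but the once-per-day check is the essential logic.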

PROJECT DESCRIPTION:

The most basic task in face recognition is, of course, face detection. Before anything else, you must
“capture” a face (Phase 1) in order to recognize it when it is compared with a new face captured in the
future (Phase 3).

The most common way to detect a face (or any object) is to use the “Haar Cascade classifier”.

Object Detection using Haar feature-based cascade classifiers is an effective object detection method
proposed by Paul Viola and Michael Jones in their paper, “Rapid Object Detection using a Boosted
Cascade of Simple Features” in 2001. It is a machine learning based approach where a cascade function is
trained from a lot of positive and negative images. It is then used to detect objects in other images.

Here we will work with face detection. Initially, the algorithm needs a lot of positive images (images
of faces) and negative images (images without faces) to train the classifier. Then we need to extract
features from them. The good news is that OpenCV comes with a trainer as well as a detector. If you want
to train your own classifier for any object, such as cars or planes, you can use OpenCV to create one;
full details are given in Cascade Classifier Training.
If you do not want to create your own classifier, OpenCV already contains many pre-trained classifiers
for faces, eyes, smiles, etc. Those XML files can be downloaded from the haarcascades directory.

Enough theory, let’s create a face detector with OpenCV!

Download the file: faceDetection.py from my GitHub.

import numpy as np
import cv2

faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')

cap = cv2.VideoCapture(0)
cap.set(3, 640)  # set width
cap.set(4, 480)  # set height

while True:
    ret, img = cap.read()
    img = cv2.flip(img, -1)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = faceCascade.detectMultiScale(
        gray,
        scaleFactor=1.2,
        minNeighbors=5,
        minSize=(20, 20)
    )
    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
        roi_gray = gray[y:y + h, x:x + w]
        roi_color = img[y:y + h, x:x + w]
    cv2.imshow('video', img)
    k = cv2.waitKey(30) & 0xff
    if k == 27:  # press 'ESC' to quit
        break

cap.release()
cv2.destroyAllWindows()

Believe it or not, the above few lines of code are all you need to detect a face, using Python and OpenCV.

When you compare it with the previous code used to test the camera, you will realize that a few parts
were added to it. Note the line below:

faceCascade = cv2.CascadeClassifier('Cascades/haarcascade_frontalface_default.xml')
This is the line that loads the “classifier” (that must be in a directory named “Cascades/”, under your
project directory).

Then, we will set up our camera and, inside the loop, load our input video in grayscale mode (the same
as we saw before).

Now we must call our classifier function, passing it some very important parameters, such as the scale
factor, the number of neighbors and the minimum size of the detected face.

faces = faceCascade.detectMultiScale(
    gray,
    scaleFactor=1.2,
    minNeighbors=5,
    minSize=(20, 20)
)

Where,

 gray is the input grayscale image.


 scaleFactor is the parameter specifying how much the image size is reduced at each image scale. It is
used to create the scale pyramid.
 minNeighbors is the parameter specifying how many neighbors each candidate rectangle should have in
order to be retained. A higher value gives fewer false positives.
 minSize is the minimum rectangle size to be considered a face.
The function will detect faces in the image. Next, we must “mark” the faces in the image using, for
example, a blue rectangle. This is done with this portion of the code:

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
    roi_gray = gray[y:y + h, x:x + w]
    roi_color = img[y:y + h, x:x + w]

If faces are found, the function returns the positions of the detected faces as rectangles with
upper-left corner (x, y), width “w” and height “h” ==> (x, y, w, h). Please see the picture.

Once we have these locations, we can create a region of interest (the drawn rectangle, or “ROI”) for the
face and present the result with the imshow() function.
Run the above Python script in your Python environment, using the RPi terminal:

python faceDetection.py

The result:

You can also include classifiers for eye detection or even smile detection. In those cases, you will
include the classifier function and the rectangle drawing inside the face loop, because it would make no
sense to detect an eye or a smile outside of a face.

Note that on a Pi, having several classifiers in the same code will slow down the processing, since this
method of detection (Haar cascades) uses a great amount of computational power. On a desktop, it is
easier to run.

Examples

On my GitHub you will find other examples:

 faceEyeDetection.py
 faceSmileDetection.py
 faceSmileEyeDetection.py
And in the picture, you can see the result.

MODULES:

Identification of a person by a face image is one of the most in-demand artificial intelligence
technologies in the field of security systems. The face recognition module built into VideoNet PSIM can
be used both to solve traditional tasks, such as controlling access to a site or detecting offenders,
and to implement unique, individual solutions thanks to the PSIM concept.

Uses of the face recognition module:

 Identification of a person (comparison against databases of employees, regular customers, criminals, etc.)

 Access control to protected sites. Identity verification for granting access

 Search for a person in the video archive of the video surveillance system

Algorithm of the face recognition module:


The face recognition module in VideoNet PSIM automatically selects, in real time, the face image that is
optimal for recognition, stores it, and recognizes it by comparing it with reference images in existing
databases. It can then issue commands: a warning to the police or security services about the appearance
of offenders in a protected area, a message to a manager about the arrival of a VIP client, or a command
to the controllers of actuators when a face is used as the identifier in an access control system.

Features of the face recognition module:

 Automatic real-time face detection

 Real-time comparing of faces from a video stream with a face database

 Perform actions based on the recognition result: allow or deny access to the site, call or inform the security service, issue an alarm, or carry out other preconfigured actions

 Face search in the archive by specified parameters: photo, age, gender, time and date

Convenience and automation features allow the operator to:

 See the result of the comparison of faces detected in real time with stored face databases

 Generate new databases based on detected and recognized faces, saving information on the place and time of the person's appearance and links to the video clip in the archive

 Manually create a face database for access to the site or premises

 Search for people in the archive by specified parameters: photos, age, gender, time and date, emotions

 Receive notifications for face recognition results

 Automatically inform the security service that a certain person has appeared in the control zone, or report the result of a search for a desired face in the video archive

 Form reports

 Automatically count faces

 Generate alarm events when face is substituted by a photo

Person classification by face image:


Using the image of a person's face, the following characteristics can be determined: gender, age and
emotions (joy, sadness, etc.). Determining such human characteristics is called classification.
Classification of a person by face image is used in many areas:
 Analysis of the age structure of the audience, for example of a store or a restaurant

 Analysis of the gender composition of the audience

 Analysis of customer service quality

 Search for people in the video archive by photo, gender, age, emotions


CONCLUSION:

In conclusion, in our research the input face images are preprocessed using advanced image processing
techniques such as contrast adjustment, bilateral filtering and histogram equalization, so as to obtain
better image features; the same techniques, plus an image blending method, are applied to the
training/template face images to ensure high-quality templates. The preprocessed input face image is
divided into k² regions, and the LBP code is calculated for every pixel in a region by comparing the
center pixel with its surrounding pixels: if a surrounding pixel is greater than or equal to the center
pixel, it is denoted as binary 1, otherwise as 0.

This process is repeated for every pixel of all other regions to obtain the binary patterns used to
construct the feature vector of the input face image. For every region, a histogram over all possible
labels is constructed; each bin of such a histogram represents a pattern and counts the number of its
occurrences in the region. The feature vector is then formed by concatenating the regional histograms
into one large histogram, which is unique to each individual, and is compared with the template face
images to recognize faces. This method improves the LBP code, and our experimental results show that it
is accurate and robust enough for a facial recognition system deployed in a real-life environment. It is
also important to state that our research does not address the issue of occlusion and masked faces in
facial recognition; addressing these issues would be a good direction for future work.
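
A minimal numpy sketch of the basic LBP feature extraction described above, with per-pixel 8-neighbour codes, per-region 256-bin histograms, and concatenation into one feature vector. The neighbour ordering and the k x k region grid size are illustrative choices, not the exact configuration of our system:

```python
import numpy as np

def lbp_codes(region):
    """8-neighbour LBP code for each interior pixel of a grayscale region."""
    c = region[1:-1, 1:-1]
    h, w = region.shape
    codes = np.zeros_like(c, dtype=np.uint8)
    # Clockwise neighbour offsets starting at the top-left
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = region[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        # Neighbour >= center contributes a binary 1 at this bit position
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes

def lbp_feature_vector(img, k=4):
    """Split the image into a k x k grid and concatenate regional LBP histograms."""
    h, w = img.shape
    hists = []
    for i in range(k):
        for j in range(k):
            region = img[i * h // k:(i + 1) * h // k, j * w // k:(j + 1) * w // k]
            hist, _ = np.histogram(lbp_codes(region), bins=256, range=(0, 256))
            hists.append(hist)
    return np.concatenate(hists)
```

Comparing two face images then reduces to comparing their concatenated histograms, for example with a chi-square distance, as is common in LBP-based face recognition.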
Five different methods of face detection and recognition have also been reviewed, namely PCA, LDA, skin
colour, wavelets and artificial neural networks. Four parameters are taken into account in this review:
the size and type of database, illumination tolerance, variations in facial expression, and variations
in pose. Since this is an independent review, note that the results are atypical and variable, as they
correspond to different experiments and studies done by previous researchers. Thus, no specific
conclusion can be drawn as to which algorithm is best for a specific task or challenge, such as various
databases, various poses, illumination tolerance or variations in facial expression. The performance of
the algorithms depends on numerous factors. Instead of using these algorithms on their own, they can be
improved or combined into new or hybrid methods that yield better performance.

REFERENCE:

[1] Deshpande, N. T., & Ravishankar, S. (2017). Face Detection and Recognition using Viola-Jones
algorithm and Fusion of PCA and ANN. Advances in Computational Sciences and Technology, 10(5),
1173- 1189.

[2] Kavia, M., & Kaur, M. (2016). A Survey Paper for Face Recognition Technologies. International
Journal of Scientific and Research Publications, 6(7).

[3] Ohol, M. R. M., & Ohol, M. S. R. PCA Algorithm for Human Face Recognition.

[4] Kasar, M. M., Bhattacharyya, D., & Kim, T. H. (2016). Face recognition using neural network: a
review. International Journal of Security and Its Applications, 10(3), 81-100.

[5] Mikhaylov, D., Samoylov, A., Minin, P., & Egorov, A. (2014, November). Face Detection and
Tracking from Image and Statistics Gathering. In Signal-Image Technology and InternetBased Systems
(SITIS), 2014 Tenth International Conference on (pp. 37-42).

[6] Liu, Z., & Wang, Y. (2000). Face detection and tracking in video using dynamic programming. In
Image Processing, 2000. Proceedings. 2000 International Conference on (Vol. 1, pp. 53-56). IEEE.

[7] Boda, R., & Priyadarsini, M. J. P. (2016). Face Detection and Tracking Using KLT and Viola-Jones.
ARPN Journal of Engineering and Applied Sciences, 11(23), 13472-1347.

[8] M. Turk and A. Pentland, Eigenfaces for recognition, Journal of Cognitive Neuroscience, 3(1), pp.
71-86, 1991.

[9] H. Lu, K. N. Plataniotis, and A. N. Venetsanopoulos, MPCA: Multilinear principal component analysis
of tensor objects, IEEE Trans. on Neural Networks, 19(1):18-39, 2008.

[10] Harguess, J., Aggarwal, J. K., A case for the average-half-face in 2D and 3D for face recognition,
IEEE Computer Society.
