Face, eye and mouth detection with Python

Parveen Kumar
6 min read · Jul 1, 2021

Are you curious about building practical projects with Python? Then you are in the right place. In this article I’ll walk through, step by step, how to detect the face, eyes and mouth in a static image using Computer Vision in Python. It is a simple but effective way of coding a face, eye and mouth detector, and a good starting project if you are a beginner in Machine Learning and Computer Vision. Before building the project and understanding the code, you first need to know a little about Computer Vision and two popular Python libraries: OpenCV and NumPy.

What is Computer Vision?

Computer Vision is a subset of Artificial Intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, computers can accurately identify and classify objects, derive meaningful information, and then take actions or make recommendations based on that information. Nowadays it is used in many complex fields, such as:

· Defect detection

· Metrology

· Assembly verification

· Screen reader

· Automated media coverage

· Cancer detection

· Automatic harvesting etc.

What is OpenCV?

Open Source Computer Vision (OpenCV) is a library built for real-time Computer Vision. It supports a wide range of programming languages such as Python, C, C++ and Java, and runs on different platforms including Windows, Linux and macOS. With it, one can process images and videos to identify objects, faces or even human handwriting. When OpenCV is integrated with libraries such as NumPy, Python can process the OpenCV array structure for analysis: to identify image patterns and their features, we treat images as arrays in a vector space and perform mathematical operations on those features.

What is NumPy?

Numerical Python (NumPy) is an open source Python library for working with multidimensional arrays and the fundamental package for scientific computing in Python. It also provides functions for linear algebra, Fourier transforms and matrices, and it is considerably faster than Python lists for numerical work.
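As a tiny, purely illustrative example of what NumPy gives you over plain Python lists:

import numpy as np

#a small 2x3 array of pixel-like values and a few vectorized operations
a = np.array([[10, 20, 30],
              [40, 50, 60]], dtype=np.uint8)
print(a.shape)   # (2, 3)
print(a.mean())  # 35.0
print(a * 2)     # element-wise multiplication, no Python loop needed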

In this project we’ll be using the Haar Cascade classifier to detect the face, eyes and mouth in an image. So, before diving deeper into the code, let’s look at how this classifier works.

Haar Cascade Algorithm:

It is a machine learning approach for detecting objects in a static image or a real-time video, based on the edge and line detection features proposed by Paul Viola and Michael Jones in 2001. The algorithm is trained on a large number of positive images (containing faces) and negative images (containing no faces). The resulting pre-trained models are available in the OpenCV GitHub repository as XML files, covering face detection, eye detection, upper and lower body detection and more (a short loading sketch follows the list below).

The algorithm contains 4 stages:

§ Haar Feature Selection

§ Creating Integral Images

§ Adaboost Training

§ Cascading Classifiers
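As a quick check of these pre-trained models, they can be loaded straight from the XML files. With the standard opencv-python wheel, the bundled cascades can also be located via cv2.data.haarcascades (the mouth cascade used later in this article is not bundled with the wheel and has to be downloaded separately). A minimal sketch, assuming that wheel:

import cv2

#locate and load one of the XML files shipped with the opencv-python package
cascade_path = cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
face_cascade = cv2.CascadeClassifier(cascade_path)
print(face_cascade.empty())  # False means the model loaded correctly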

Haar features:

The Haar wavelet is a sequence of rescaled square-shaped functions, closely related to Fourier analysis, and was first proposed by Alfréd Haar in 1909. Haar features are very similar to convolutional kernels, and they are the relevant features for face, eye and mouth detection.

[Figures: an ideal Haar feature, and the real pixel values detected on an image]

The Viola-Jones algorithm compares how close the real case is to the ideal one:

I. First compute the average of the white pixel intensities.

II. Then compute the average of the black pixel intensities.

∆ for the ideal Haar feature is 1 − 0 = 1.

∆ for the real image is 0.74 − 0.18 = 0.56.

The closer the value is to 1, the more likely we have found a Haar feature (i.e. eyebrows, nose, lips, eyes, etc.)! [In practice we will never get exactly 0 or 1, because of thresholds.]
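The same ∆ can be illustrated with a few lines of NumPy. This is only a toy sketch of the idea above, not OpenCV’s internal implementation; the pixel values are normalized to [0, 1] and, following the convention of the figures, a value near 1 stands for a dark pixel:

import numpy as np

#toy 4x4 window; values near 1 represent dark pixels (illustrative convention)
window = np.array([
    [0.1, 0.2, 0.7, 0.8],
    [0.2, 0.1, 0.8, 0.7],
    [0.1, 0.2, 0.7, 0.8],
    [0.2, 0.1, 0.8, 0.7],
])

white_region = window[:, :2]  # left half: expected to be light
dark_region = window[:, 2:]   # right half: expected to be dark

delta = dark_region.mean() - white_region.mean()
print(round(delta, 2))  # 0.6 here; the closer to 1, the stronger the feature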

How to code:

First, install the OpenCV library, which can be done easily with the pip command pip install opencv-python, and then install the NumPy library with pip install numpy.

· To get started, import the NumPy library as np and the OpenCV library as cv2.

#This project detects the face, eyes and mouth in an image
#import the necessary libraries

import numpy as np
import cv2

· The OpenCV library comes with many pre-trained classifiers for faces, eyes, smiles, etc. The required XML files can be found on GitHub. Download the pre-trained face, eye and mouth detection models, save the files in the current directory and load the haarcascade frontalface, haarcascade eye and haarcascade mouth XML files in the program.

#load the xml files for face, eye and mouth detection into the program
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
mouth_cascade = cv2.CascadeClassifier('haarcascade_mcs_mouth.xml')

· Now we will read the image with the imread function for further modification. Note that OpenCV loads the image in BGR channel order, not RGB.

#read the image for further editing
image = cv2.imread('big bang final.jpeg')

· You can use the imshow function to show the original image before the face, eyes and mouth are detected, and add a delay of 100 milliseconds with waitKey(100) [optional].

#show the original image
cv2.imshow('Original image', image)
cv2.waitKey(100)

· Convert the image into grayscale with the cvtColor function, which takes 2 parameters here: the variable where the image is stored after reading (i.e. image) and the colour-conversion flag cv2.COLOR_BGR2GRAY.

#convert the BGR image to a grayscale image
gray_image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

· Now we make the program identify faces using the Haar-based classifier with the detectMultiScale function, passing 3 arguments — the grayscale image, scaleFactor and minNeighbours. scaleFactor controls how much the image is scaled down at each step of the search, and minNeighbours controls how many overlapping detections are required before a face is accepted.

#identify the face using the haar-based classifier
faces = face_cascade.detectMultiScale(gray_image, 1.4, 4)

· Now, iterate through the faces array and draw a rectangle around each face. [In the code, ROI stands for Region Of Interest. The line roi_gray = gray_image[y:y+h, x:x+w] selects the rows from y to y+h and the columns from x to x+w; roi_color works the same way on the colour image.]

#iterate through the faces array and draw a rectangle
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x+w, y+h), (0, 0, 255), 2)
    roi_gray = gray_image[y:y+h, x:x+w]
    roi_color = image[y:y+h, x:x+w]

· Now identify the eyes and the mouth, again with the detectMultiScale function.

#identify the eyes and mouth using haar-based classifiers
eyes = eye_cascade.detectMultiScale(gray_image, 1.3, 5)
mouth = mouth_cascade.detectMultiScale(gray_image, 1.5, 11)

· Iterate through the eyes and mouth arrays and draw rectangles.

#iterate through the eyes and mouth arrays and draw rectangles
for (ex, ey, ew, eh) in eyes:
    cv2.rectangle(image, (ex, ey), (ex+ew, ey+eh), (0, 255, 0), 2)
for (mx, my, mw, mh) in mouth:
    cv2.rectangle(image, (mx, my), (mx+mw, my+mh), (255, 0, 0), 2)
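· Note that the code above searches for eyes and mouths across the whole image. A common refinement — shown below as an optional sketch, not part of the original walkthrough — is to run those detectors only inside each detected face ROI (this is what roi_gray was prepared for), which reduces false positives. The coordinates returned are relative to the ROI, so the face offset (x, y) is added back when drawing:

#optional variant: detect eyes and mouth inside each face ROI
for (x, y, w, h) in faces:
    roi_gray = gray_image[y:y+h, x:x+w]
    eyes = eye_cascade.detectMultiScale(roi_gray, 1.3, 5)
    mouth = mouth_cascade.detectMultiScale(roi_gray, 1.5, 11)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(image, (x+ex, y+ey), (x+ex+ew, y+ey+eh), (0, 255, 0), 2)
    for (mx, my, mw, mh) in mouth:
        cv2.rectangle(image, (x+mx, y+my), (x+mx+mw, y+my+mh), (255, 0, 0), 2)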

· Now, to show the final image after detecting the face, eyes and mouth, we again use the imshow function and wait indefinitely for a key press by calling waitKey() with no delay argument.

#show the final image after detection
cv2.imshow('face, eyes and mouth detected image', image)
cv2.waitKey()

· Finally, print a success message to the user with the print function.

#show a successful message to the user
print("Face, eye and mouth detection is successful")
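· Optionally, once a key has been pressed you can close the display windows explicitly (a small cleanup step, not part of the original walkthrough):

#close any windows opened by imshow (optional cleanup)
cv2.destroyAllWindows()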

Original Image:

The output after face, eyes and mouth detection:

After applying the Haar cascade
