


5.3 Face tracking and Color tracking

Posted by Fiona Su on

1. Target location capture principle

1.1 Face recognition

The most basic task in face recognition is face detection: you must first "capture" a face before you can later compare it against newly captured faces to recognize it. Here we only cover the detection step.

The most common face detection method is to use the "Haar Cascade Classifier".

Here we use a pre-trained classifier. In OpenCV, face detection works much the same on static images as on live video: video face detection simply reads each frame from the camera and then applies the static-image detection method to it.

Face detection first requires a classifier:

face_cascade = cv2.CascadeClassifier('123.xml')

123.xml is the Haar cascade data; this xml file can be obtained from data/haarcascades in the OpenCV 3 source code.

The actual face detection is then performed by the function face_cascade.detectMultiScale().

We cannot pass each frame captured by the camera directly to detectMultiScale(). Instead, we must first convert the frame to grayscale, because face detection requires that color space.

(Note: be sure to enter the correct path to 123.xml.)

1.2 Color recognition

The principle of color recognition is to classify and mark, in each frame, the pixels that fall within the target color's range in HSV color space.

The first step is to convert each frame (BGR in OpenCV) to HSV with the function:

cv2.cvtColor(frame,cv2.COLOR_BGR2HSV)

A binary mask is then constructed from threshold values for the target color. After morphological processing (erosion followed by dilation), the mask and the original image are combined with a bitwise AND. Once the color region is found, a circle is drawn around its contour to label it.

2. Target tracking algorithm implementation

2.1 Face tracking

First, we import the required packages and then create the camera instance, motion-control variables, PID controller instance, PTZ bus servo control instance, and the display widgets we need.

Code as shown below:

[image: code screenshot]

We load the Haar cascade classifier file "123.xml" for face detection:

Code as shown below:

[image: code screenshot]

Then we enter the main loop: once face recognition obtains the position of the current face, the PTZ controller tracks the face through the PID controller.

Here we use the positional PID algorithm:

Code as shown below:

[image: code screenshot]
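The screenshot is not reproduced here; as an illustrative sketch (the gains, frame size, and face coordinates below are made-up values, and the actual servo call is omitted), a positional PID controller steering the pan axis toward the face center might look like this:

```python
class PositionalPID:
    """Positional PID: the output depends on the full error history,
    u = Kp*e + Ki*sum(e) + Kd*(e - e_prev)."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error):
        self.integral += error                  # accumulate the error (I term)
        derivative = error - self.prev_error    # change since last step (D term)
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Illustrative use: drive the pixel error between face center and frame
# center toward zero; the result would be sent to the PTZ pan servo.
pan_pid = PositionalPID(kp=0.05, ki=0.0, kd=0.01)   # gains are assumptions
frame_center_x = 320                                # for a 640x480 frame
face_center_x = 400                                 # from detectMultiScale output
correction = pan_pid.update(frame_center_x - face_center_x)
```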

The corresponding complete source code is located at:

/home/jetbot/Notebook/11.Face tracking/Face tracking.ipynb

2.2 Color tracking

Color tracking is implemented much like face tracking: apart from the target-capture principle, the overall algorithm flow is the same.

The difference is that we can run the code in any of the following cells to set the color to be captured as the target color:

[image: code screenshot]
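Those cells are shown only as a screenshot; typical HSV threshold pairs for such target colors (illustrative values only, remembering that OpenCV's hue axis runs 0-179, so the notebook's actual numbers may differ) look like:

```python
import numpy as np

# Illustrative HSV bounds for common target colors; each pair is
# (lower, upper) as passed to cv2.inRange().
color_ranges = {
    "red":    (np.array([0, 100, 100]),   np.array([10, 255, 255])),
    "green":  (np.array([50, 100, 100]),  np.array([70, 255, 255])),
    "blue":   (np.array([100, 100, 100]), np.array([124, 255, 255])),
    "yellow": (np.array([20, 100, 100]),  np.array([34, 255, 255])),
}

# Selecting a target color yields the bounds used by the mask step
color_lower, color_upper = color_ranges["blue"]
```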

In the main PTZ loop shown in the figure below, if recognition performs poorly even when ambient light is sufficient, you can try modifying the program:

    # Gaussian filtering: (5, 5) is the kernel size; 0 lets OpenCV derive the standard deviation
    frame_ = cv2.GaussianBlur(frame, (5, 5), 0)
    hsv = cv2.cvtColor(frame_, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, color_lower, color_upper)
    # Erosion removes speckle noise at the mask edges
    mask = cv2.erode(mask, None, iterations=2)
    # Dilation restores the eroded target region
    mask = cv2.dilate(mask, None, iterations=2)

Tuning the parameters of these filtering and morphology functions can improve the result.

[image: code screenshot]

The corresponding complete source code is located at:

/home/jetbot/Notebook/12.Color tracking/Color tracking.ipynb
