


5.5 Object follow

Posted by Fiona Su

1. Introduction to the COCO dataset pre-trained model

What is COCO?

COCO is a large-scale object detection, segmentation, and captioning dataset. COCO has several features:

  • Object segmentation
  • Recognition in context
  • Superpixel stuff segmentation
  • 330K images (>200K labeled)
  • 1.5 million object instances
  • 80 object categories
  • 91 stuff categories
  • 5 captions per image
  • 250,000 people with keypoints

To achieve object tracking, we use a neural network pre-trained on the COCO dataset (http://cocodataset.org; the site may require a VPN to access in some regions) to detect 90 different common objects.

For more details, please refer to: [Jetbot-AI Car] --> [Annex] --> [COCO_data.txt]

2. Implementation of the motion algorithms

Shared processing algorithms:

detection_center(detection)

Computes the center x, y coordinates of a detected object, as sketched below.

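A minimal sketch of this helper, assuming ``detection['bbox']`` holds normalized ``[x_min, y_min, x_max, y_max]`` coordinates as in the stock jetbot notebooks:

```python
def detection_center(detection):
    """Compute the (x, y) center of a detection's bounding box,
    offset so that (0, 0) is the center of the image."""
    bbox = detection['bbox']  # normalized [x_min, y_min, x_max, y_max]
    center_x = (bbox[0] + bbox[2]) / 2.0 - 0.5
    center_y = (bbox[1] + bbox[3]) / 2.0 - 0.5
    return (center_x, center_y)
```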

norm(vec)

Computes the length of a two-dimensional vector, as sketched below.

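A sketch using NumPy:

```python
import numpy as np

def norm(vec):
    """Compute the length (Euclidean norm) of a 2D vector."""
    return np.sqrt(vec[0]**2 + vec[1]**2)
```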

closest_detection(detections)

Finds the detection closest to the center of the image, as sketched below.

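A sketch, reusing the two helpers above:

```python
def closest_detection(detections):
    """Return the detection whose center is closest to the image
    center, or None if the list is empty."""
    closest = None
    for det in detections:
        if closest is None or norm(detection_center(det)) < norm(detection_center(closest)):
            closest = det
    return closest
```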

Algorithms added in the advanced optimized version:

Because the basic version's following motion is not very satisfying, and not very smooth while following, we add three new PID adjustment algorithms to control the following process (see the sketch after this list):

① Following speed PID adjustment algorithm: this makes the Jetbot move faster when it is far from the target and slower as the distance closes, until it stops at a set distance from the target.

② Steering gain PID adjustment algorithm: when the Jetbot deviates significantly from the direction of the tracked target, the steering gain becomes larger to speed up the direction correction; when the deviation is small, the steering gain stays small, making the motion look smoother.

③ Camera vertical angle PID adjustment algorithm: when tracking small, non-human objects, the target can be lost in the vertical direction, so the camera's vertical (tilt) angle is also adjusted, letting the Jetbot automatically keep the target object in its field of view.
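The notebook's actual PID code appears in section 5 below; as an illustration only, all three adjustments can be driven by a simple positional PID controller like the one below (the class name and structure are our sketch, not the Yahboom module):

```python
class PositionalPID:
    """Minimal positional PID controller:
    output = Kp*error + Ki*sum(errors) + Kd*(error - last_error)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.last_error = 0.0

    def update(self, error):
        self.integral += error
        derivative = error - self.last_error
        self.last_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The three algorithms feed this controller different error signals: the distance to the target (follow speed), the horizontal offset from the image center (steering gain), and the vertical offset (camera tilt).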

3. Object following

We need to import the ``ObjectDetector`` class, which uses our pre-trained SSD engine. The model can detect many classes (you can check [this file](https://github.com/tensorflow/models/blob/master/research/object_detection/data/mscoco_complete_label_map.pbtxt) to get a complete list of class indices).

This model comes from the [TensorFlow Object Detection API](https://github.com/tensorflow/models/tree/master/research/object_detection). The API also provides utilities for training object detectors for custom tasks. Once the model is trained, we optimize it with NVIDIA TensorRT for the Jetson Nano. This makes the network very fast, fast enough to control the Jetbot in real time.

First, import the ``ObjectDetector`` class and load the pre-trained SSD engine:

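A sketch of that cell, assuming the engine ships as ``ssd_mobilenet_v2_coco.engine`` as in the stock jetbot image (the file name on your SD card may differ):

```python
from jetbot import ObjectDetector

# Load the TensorRT-optimized SSD engine
model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
```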

Internally, the ``ObjectDetector`` class uses the TensorRT Python API to execute the engine we provide. It is also responsible for pre-processing the input to the neural network and parsing the detected objects.

Currently, it only works with engines created with the ``jetbot.ssd_tensorrt`` package. This package has utilities for converting models from the TensorFlow Object Detection API into optimized TensorRT engines. Next, let's initialize the camera. Our detector needs 300x300 pixel input, so we set this parameter when creating the camera.

(Note: the resolution must be 300 x 300, otherwise objects will not be recognized by the model.)

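A sketch of the camera setup (``Camera.instance`` is the standard jetbot singleton accessor):

```python
from jetbot import Camera

# The detector expects 300x300 input, so create the camera at that resolution
camera = Camera.instance(width=300, height=300)
```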

Next, let's execute our network on some camera input. By default, the ``ObjectDetector`` class expects the camera to produce images in ``bgr8`` format.

However, if the input format is different, you can override the default preprocessor function.

If there are any COCO objects in the camera's field of view, they should now be stored in the ``detections`` variable, and we print them out with the code shown below or with a text widget:

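A sketch of both options, assuming ``model`` and ``camera`` from the cells above:

```python
from IPython.display import display
import ipywidgets.widgets as widgets

# Run the network on the current camera frame
detections = model(camera.value)
print(detections)

# Alternatively, show the raw detections in a text widget
detections_widget = widgets.Textarea()
detections_widget.value = str(detections)
display(detections_widget)
```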

Print out the first object detected in the first image:

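For example (indices are zero-based; each detection is a dict with a label index, a confidence score, and a normalized bounding box):

```python
image_number = 0   # first image in the batch
object_number = 0  # first detected object

print(detections[image_number][object_number])
```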

Control the robot to follow the center object

Now we want the robot to follow the object of the specified class. To do this, we need to do the following:

1. Detect objects that match the specified class

2. Select the object closest to the center of the camera's field of view. This is the target object.

3. Steer the robot toward the target object; if no target is detected, let the robot wander forward

We will also create widgets to control the target object label, the robot speed, and the turn gain, which controls how fast the robot turns based on the distance between the target object and the center of the robot's field of view.

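A sketch of these widgets (names such as ``label_widget`` follow the stock jetbot notebook; the default values here are illustrative):

```python
import ipywidgets.widgets as widgets

# Live camera view plus the three control widgets
image_widget = widgets.Image(format='jpeg', width=300, height=300)
label_widget = widgets.IntText(value=1, description='tracked label')  # 1 = person in COCO
speed_widget = widgets.FloatSlider(value=0.4, min=0.0, max=1.0, description='speed')
turn_gain_widget = widgets.FloatSlider(value=0.8, min=0.0, max=2.0, description='turn gain')
```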

Create a Robot instance to drive the motors, as shown below:

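Using the standard jetbot API:

```python
from jetbot import Robot

# Robot wraps the two drive motors
robot = Robot()
```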

Finally, we need to display all the control widgets and connect the network execution function to the camera updates. The tracked object is selected by the value of ``label_widget``.

For more details, please refer to: [Jetbot-AI Car] --> [Annex] --> [COCO_data.txt]

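A condensed sketch of the execution function, modeled on the stock jetbot object-following notebook (the 300-pixel scaling matches the camera resolution above; treat this as an outline rather than the exact Yahboom code):

```python
import cv2
from IPython.display import display
import ipywidgets.widgets as widgets
from jetbot import bgr8_to_jpeg

def execute(change):
    image = change['new']

    # Run the detector on the current frame (first image in the batch)
    detections = model(image)[0]

    # Draw all detections in blue (image is bgr8, so (255, 0, 0) is blue)
    for det in detections:
        bbox = det['bbox']
        cv2.rectangle(image, (int(300 * bbox[0]), int(300 * bbox[1])),
                      (int(300 * bbox[2]), int(300 * bbox[3])), (255, 0, 0), 2)

    # Keep only detections that match the tracked label
    matching = [d for d in detections if d['label'] == int(label_widget.value)]

    # Pick the matching detection closest to the image center
    target = closest_detection(matching)

    if target is None:
        # No target in view: wander forward
        robot.forward(float(speed_widget.value))
    else:
        # Draw the target in green and steer toward its horizontal center
        bbox = target['bbox']
        cv2.rectangle(image, (int(300 * bbox[0]), int(300 * bbox[1])),
                      (int(300 * bbox[2]), int(300 * bbox[3])), (0, 255, 0), 5)
        center = detection_center(target)
        robot.set_motors(
            float(speed_widget.value + turn_gain_widget.value * center[0]),
            float(speed_widget.value - turn_gain_widget.value * center[0]))

    image_widget.value = bgr8_to_jpeg(image)

# Show the widgets and run once on the current frame
display(widgets.VBox([image_widget, label_widget, speed_widget, turn_gain_widget]))
execute({'new': camera.value})
```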

Call the following code block to connect the execution function to each camera frame update:

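Using the traitlets ``observe`` mechanism that the jetbot ``Camera`` class exposes:

```python
# Attach execute() to every new camera frame
camera.unobserve_all()
camera.observe(execute, names='value')
```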

If the robot is not blocked, you should see blue boxes drawn around detected objects, and the target object (the one the robot follows) displayed in green.

When a target is detected, the robot should turn toward it.

You can call the following code block to manually disconnect the camera and stop the robot.

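For example:

```python
import time

# Detach the execution function and stop the motors
camera.unobserve_all()
time.sleep(1.0)  # let in-flight frame callbacks finish
robot.stop()
```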

The corresponding complete source code is located at:

/home/jetbot/Notebook/14.Object follow-Basic/Object follow-Basic.ipynb

4. Object follow with Automatic avoid

We combine the obstacle avoidance model trained in 【5.4 Automatic avoid】 with the object following from this lesson. Loading both models at the same time causes no conflict and has no noticeable effect on processing speed, so the two functions can be used in combination. The differences from the basic version of the code are:

Load the obstacle avoidance model:

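A sketch of loading the model from 5.4, assuming it was trained as an AlexNet with a 2-class (blocked/free) head and saved as ``best_model.pth``, as in the stock collision-avoidance notebook:

```python
import cv2
import numpy as np
import torch
import torchvision
import torch.nn.functional as F

# AlexNet with a 2-class head (blocked / free), weights from 5.4
collision_model = torchvision.models.alexnet(pretrained=False)
collision_model.classifier[6] = torch.nn.Linear(collision_model.classifier[6].in_features, 2)
collision_model.load_state_dict(torch.load('best_model.pth'))
device = torch.device('cuda')
collision_model = collision_model.to(device)

# Same normalization used during training
mean = 255.0 * np.array([0.485, 0.456, 0.406])
stdev = 255.0 * np.array([0.229, 0.224, 0.225])
normalize = torchvision.transforms.Normalize(mean, stdev)

def preprocess(camera_value):
    """Convert a bgr8 camera frame to a normalized CUDA tensor."""
    x = cv2.resize(camera_value, (224, 224))
    x = cv2.cvtColor(x, cv2.COLOR_BGR2RGB)
    x = x.transpose((2, 0, 1))
    x = torch.from_numpy(x).float()
    x = normalize(x)
    return x.to(device)[None, ...]
```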

Execute the collision model before the following logic to determine whether the path is blocked. If it is blocked, turn left and ``return`` immediately, skipping the following logic for this frame and starting the next loop.

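A sketch of that check at the top of ``execute()`` (the 0.5 threshold and 0.3 turn speed are illustrative values):

```python
def execute(change):
    image = change['new']

    # Run the collision model first to decide whether the path is blocked
    collision_output = collision_model(preprocess(image)).detach().cpu()
    prob_blocked = float(F.softmax(collision_output.flatten(), dim=0)[0])

    if prob_blocked > 0.5:
        # Blocked: turn left and skip the following logic for this frame
        robot.left(0.3)
        image_widget.value = bgr8_to_jpeg(image)
        return

    # ... otherwise run the object-following logic from the basic version ...
```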

The corresponding complete source code is located at:

/home/jetbot/Notebook/15.Object follow-Avoid/Object follow-Avoid.ipynb

5. Optimized object following

For advanced optimized object tracking, we have added three new algorithms:

Following speed PID Adjustment Algorithm

Steering gain PID Adjustment Algorithm

Camera vertical angle PID Adjustment Algorithm

The differences between it and the basic version are:

Import the PID driver module, create the PID controller instances, and initialize the corresponding control variables:

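As a stand-in for the Yahboom PID module, we reuse the ``PositionalPID`` class sketched in section 2 (all gains below are placeholders, not the notebook's tuned values):

```python
# Hypothetical gains; the notebook's tuned values will differ
follow_speed_pid = PositionalPID(kp=0.8, ki=0.0, kd=0.1)   # forward speed vs. distance
turn_gain_pid    = PositionalPID(kp=1.5, ki=0.0, kd=0.2)   # steering gain vs. lateral offset
camera_tilt_pid  = PositionalPID(kp=0.6, ki=0.0, kd=0.05)  # gimbal tilt vs. vertical offset
```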

If the configured tracking object is detected, the following three PID adjustment algorithms are applied. When following a person, the person is large enough in the Jetbot's view that the gimbal's vertical angle does not need adjustment. In addition, we recommend tilting the camera up to an elevation angle when following a person, which widens the Jetbot's field of view and improves the following performance.

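An illustrative combination of the three adjustments for one frame (``target_width`` and the commented-out gimbal helper are hypothetical, not the Yahboom API):

```python
def apply_pid_adjustments(target, target_width=0.4):
    """Illustrative only: apply the three PID adjustments for one frame.
    target_width is a hypothetical desired bounding-box width setpoint."""
    center_x, center_y = detection_center(target)
    bbox = target['bbox']

    # 1. Follow-speed PID: a smaller box means a farther target, so drive
    #    faster; slow down as the box grows and the target gets close.
    distance_error = target_width - (bbox[2] - bbox[0])
    speed = follow_speed_pid.update(distance_error)

    # 2. Steering-gain PID: a larger horizontal offset from image center
    #    yields a stronger correction, so direction calibration is faster.
    turn = turn_gain_pid.update(center_x)
    robot.set_motors(speed + turn, speed - turn)

    # 3. Camera-tilt PID: keep the target vertically centered by nudging
    #    the gimbal servo. set_camera_tilt() is a hypothetical helper.
    tilt_adjust = camera_tilt_pid.update(center_y)
    # set_camera_tilt(tilt_adjust)
```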

The corresponding complete source code is located at:

/home/jetbot/Notebook/16.Object follow-Optimized/Object follow-Optimized.ipynb
