


5.6 Autopilot


!!!Note:

①You need to use our track. The track is not included in the car kit, so please purchase it separately.

②If you want to use your own track, you need to retrain the model and adjust the parameters in the program according to the actual situation.

Part 1 - Using our track:

First, stop the APP program that starts automatically on boot. Enter the following command:

sudo systemctl stop jetbot_start

image.png 

Then, place the robot car on the track and run the Autopilot program.

image.png 

If you want to use your own track, you need to retrain the model and adjust the parameters in the program, as described in the following sections.

1. Collect data through Jetbot

image.png 

If you have already browsed the collision avoidance example, you should be familiar with the following three steps.

1). Data collection

2). Training

3). Deployment

In this example, we will do the same thing. However, in addition to classification, you will learn another basic technique, regression, which we will use to enable Jetbot to follow a path (in fact, any path or target point). To collect the data, we will do the following:

1). Place the Jetbot at different locations on the path (offset from the center, different angles, etc.).

2). Display the live camera feed from the robot.

3). Using the gamepad controller, place a “green dot” on the image at the point corresponding to the target direction we want the robot to travel.

4). Store the X, Y values of this green dot along with the image from the robot's camera.

Then, in the training notebook, we will train a neural network to predict the X, Y values of our labels. In the live demo, we will use the predicted X, Y values to calculate an approximate steering value (it is not an "exact" angle, because that would require image calibration, but it is roughly proportional to the angle, so our controller will work fine).

So, how do we determine the target position for this example?

Here are some guidelines that we think might be helpful:

1). Look at the live video of the camera.

2). Imagine the path that the robot should follow (try to approximate how far ahead it needs to aim to stay on the road, etc.).

3). Place the target as far away as possible, while making sure the robot could head straight toward it without running off the road.

For example, if we are on a very straight road, we can place it on the horizon. If we are making a sharp turn, it may need to be placed closer to the robot so that it does not run out of bounds.

Assuming our deep learning model works as expected, these labeling guidelines should ensure the following:

1). The robot can move directly to the target safely (not out of bounds, etc.)

2). The goal will continue to move along the path we imagined

What we get is like a carrot on a stick: the carrot moves along the trajectory we want, deep learning decides where to place the carrot, and Jetbot simply follows it.

We start by importing all the libraries needed for "data collection." We will primarily use OpenCV to visualize and save images with labels.

Libraries such as uuid and datetime are used for naming the images:

image.png 
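
A minimal sketch of the imports this step typically needs (assuming the standard JetBot library, OpenCV, and Jupyter widgets are installed; the exact set in the screenshot may differ):

```python
# Sketch of the data-collection imports (assumes the standard JetBot image
# with the jetbot, ipywidgets and OpenCV packages installed).
import os
import uuid          # unique names for the saved images
import datetime      # optional timestamps
import cv2           # OpenCV, used to draw the green target dot
import numpy as np
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Robot, Camera, bgr8_to_jpeg
```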

Our neural network takes a 224x224-pixel image as input. We set the camera to this resolution to minimize the file size of the dataset (we have tested that this works for this task).

In some scenarios, it's best to collect the data at a larger image size and then reduce it to the desired size.

image.png 
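
A sketch of the camera setup, assuming the JetBot ``Camera`` class and the imports above:

```python
# Sketch: create a 224x224 camera instance and show the live feed in the notebook.
camera = Camera.instance(width=224, height=224)

image_widget = widgets.Image(format='jpeg', width=224, height=224)
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```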

After running the code above, the following interface will be displayed below the cell:

image.png 

This step is similar to the gamepad remote-control task. In this example, we will use the gamepad controller to label the images.

First, we create an instance of the Controller widget, which we will use to label the image with the "x" and "y" values described in the introduction. The Controller widget accepts an index parameter that specifies the controller number. This is useful if you have multiple controllers, or if some gamepads appear as multiple controllers. To determine the index of the controller we are using, follow the steps from the gamepad remote-control lesson before creating the controller instance:

1). Visit http://html5gamepad.com

2). Press the button on the gamepad you are using

3). Remember the index of the gamepad that responds to the button

Then, we will use this index to create and display the controller.

image.png 
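
A sketch of this cell; ``index=0`` is an assumption and should be replaced by the index reported by the website above:

```python
# Sketch: create the gamepad widget; set index to the value shown
# on html5gamepad.com for your controller.
controller = widgets.Controller(index=0)
display(controller)
```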

Next, we connect the gamepad controller so we can use it to label the images.

image.png 

The code below will display the live image feed and the number of images we saved.

We store the target's X, Y values as follows:

1). Put the green dot on the target

2). Press the 13th button to save

Then the data we want will be saved to the ``dataset_xy`` folder. The saved file naming format is:

``xy_<x value>_<y value>_<uuid>.jpg``

When we train, we load the image and parse the x and y values in the file name.

Code shown below:

image.png 
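
For reference, a sketch of how this labeling and saving step can be implemented. The controller axes (2 and 3) and the ``x * 50 + 50`` filename encoding are assumptions borrowed from the standard JetBot road-following notebook; adapt them to your controller and setup.

```python
# Sketch of the labelling and saving logic (axis numbers and the x*50+50
# encoding are assumptions; button 13 saves a snapshot, as described above).
DATASET_DIR = 'dataset_xy'
os.makedirs(DATASET_DIR, exist_ok=True)

x_slider = widgets.FloatSlider(min=-1.0, max=1.0, description='x')
y_slider = widgets.FloatSlider(min=-1.0, max=1.0, description='y')
traitlets.dlink((controller.axes[2], 'value'), (x_slider, 'value'))
traitlets.dlink((controller.axes[3], 'value'), (y_slider, 'value'))

count_widget = widgets.IntText(description='count',
                               value=len(os.listdir(DATASET_DIR)))

def xy_uuid(x, y):
    # Encode the label into the name: xy_<x value>_<y value>_<uuid>.jpg
    return 'xy_%03d_%03d_%s' % (x * 50 + 50, y * 50 + 50, uuid.uuid1())

def save_snapshot(change):
    if change['new']:                                  # button pressed down
        name = xy_uuid(x_slider.value, y_slider.value) + '.jpg'
        with open(os.path.join(DATASET_DIR, name), 'wb') as f:
            f.write(image_widget.value)                # JPEG bytes of the live view
        count_widget.value = len(os.listdir(DATASET_DIR))

controller.buttons[13].observe(save_snapshot, names='value')
display(widgets.VBox([x_slider, y_slider, count_widget]))
```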

For convenience, we can start a separate thread that lets us drive the Jetbot with the gamepad while collecting data:

image.png 

Create a method that adjusts the Jetbot's pose to the desired driving angle, and call this method to make adjustments.

image.png 

Collect as much data as possible following the method described above; otherwise, recognition may be inaccurate during autonomous driving.

The corresponding complete source code is located at:

/home/jetbot/Notebook/17.Autopilot-Basic/Data collection.ipynb

2. Train the neural network model

We will train a neural network to get an input image and output a set of x, y values corresponding to a target.

We will use the PyTorch deep learning framework from the previous lessons to train a ResNet-18 model that recognizes the road for autonomous driving.

First, we need to import all the required packages:

image.png 

We create a custom ``torch.utils.data.Dataset`` class that implements the ``__len__`` and ``__getitem__`` functions.

This class is responsible for loading the image and parsing the x and y values in the image file name.

Because we implement the ``torch.utils.data.Dataset`` interface, we can use all of the standard torch data utilities. We hard-coded some transformations (such as color jitter) into the dataset.

We make the random horizontal flip optional (in case you want to follow an asymmetric path, such as a road where the robot should keep to one side). If it does not matter whether Jetbot follows such a convention, you can enable flips to augment the dataset.

image.png 
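
For reference, a sketch of such a dataset class, assuming the ``xy_<x value>_<y value>_<uuid>.jpg`` naming with the ``x * 50 + 50`` encoding used in the collection sketch earlier:

```python
# Sketch of a torch Dataset that parses the x, y label from the file name.
import os
import glob
import numpy as np
import PIL.Image
import torch
import torchvision.transforms as transforms

def get_x(name):
    # invert the x*50+50 encoding used when the image was saved
    return (float(int(name.split('_')[1])) - 50.0) / 50.0

def get_y(name):
    return (float(int(name.split('_')[2])) - 50.0) / 50.0

class XYDataset(torch.utils.data.Dataset):
    def __init__(self, directory, random_hflips=False):
        self.image_paths = glob.glob(os.path.join(directory, '*.jpg'))
        self.random_hflips = random_hflips
        self.color_jitter = transforms.ColorJitter(0.3, 0.3, 0.3, 0.3)

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        path = self.image_paths[idx]
        image = PIL.Image.open(path)
        x = get_x(os.path.basename(path))
        y = get_y(os.path.basename(path))
        if self.random_hflips and float(np.random.rand(1)) > 0.5:
            image = transforms.functional.hflip(image)
            x = -x                                   # mirror the label too
        image = self.color_jitter(image)
        image = transforms.functional.to_tensor(image)
        image = transforms.functional.normalize(
            image, [0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        return image, torch.tensor([x, y]).float()

dataset = XYDataset('dataset_xy', random_hflips=False)
```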

Split the data set into a training set and a test set that will be used to verify the accuracy of the model we are training:

image.png 

We use the ``DataLoader`` class to load data in bulk, shuffle data, and allow multiple child processes to be used.

In this example, we use a batch size of 64. The batch size depends on the memory available on the GPU and can affect the accuracy of the model.

Run the following cell code to create the training set data loader and test set data loader:

image.png 
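
A sketch of the split and the two loaders (the 90/10 split ratio here is an assumption):

```python
# Sketch: hold out 10% of the data for testing and build batched loaders.
test_percent = 0.1
num_test = int(test_percent * len(dataset))
train_dataset, test_dataset = torch.utils.data.random_split(
    dataset, [len(dataset) - num_test, num_test])

train_loader = torch.utils.data.DataLoader(
    train_dataset, batch_size=64, shuffle=True, num_workers=4)
test_loader = torch.utils.data.DataLoader(
    test_dataset, batch_size=64, shuffle=True, num_workers=4)
```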

The ResNet-18 model we use comes from PyTorch TorchVision. Through transfer learning, we can reuse a model pre-trained on millions of images for a new task that has much less data available.

For more information, please visit ResNet-18:

https://github.com/pytorch/vision/blob/master/torchvision/models/resnet.py

More details about transfer learning (note: accessing YouTube may require a VPN in some regions):

https://www.youtube.com/watch?v=yofjFQddwHE

Then we transfer the model to the GPU via CUDA:

image.png 
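
A sketch of defining the model and moving it to the GPU (replacing the final fully connected layer with a 2-unit output for the x, y regression):

```python
# Sketch: ImageNet-pretrained ResNet-18 with the final fully connected layer
# replaced by a 2-unit output for the (x, y) regression target.
import torch
import torchvision

device = torch.device('cuda')
model = torchvision.models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(512, 2)
model = model.to(device)
```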

Then, we can train the regression model. Here we set NUM_EPOCHS to 50, i.e. we train for 50 epochs, and we save the best model whenever the test loss decreases:

image.png 
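
A sketch of the training loop under the assumptions above (the Adam optimizer and MSE loss are assumptions consistent with the standard JetBot road-following example):

```python
# Sketch: train for 50 epochs with MSE loss, saving the checkpoint whenever
# the test loss improves.
NUM_EPOCHS = 50
BEST_MODEL_PATH = 'best_steering_model_xy.pth'
best_loss = 1e9

optimizer = torch.optim.Adam(model.parameters())

for epoch in range(NUM_EPOCHS):
    model.train()
    for images, labels in iter(train_loader):
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(images), labels)
        loss.backward()
        optimizer.step()

    model.eval()
    test_loss = 0.0
    with torch.no_grad():
        for images, labels in iter(test_loader):
            images, labels = images.to(device), labels.to(device)
            test_loss += float(torch.nn.functional.mse_loss(model(images), labels))
    test_loss /= len(test_loader)

    print('epoch %d: test loss %f' % (epoch, test_loss))
    if test_loss < best_loss:
        torch.save(model.state_dict(), BEST_MODEL_PATH)
        best_loss = test_loss
```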

Once the model is trained, it will generate a ``best_steering_model_xy.pth`` file, which we will use in the autopilot routine for inference.

The corresponding complete source code is located at:

/home/jetbot/Notebook/17.Autopilot-Basic/train model.ipynb

3. Implementation of motion algorithm

Here we do not use our own PID driver; a proportional-derivative (PD) controller is sufficient to meet our requirements:

image.png

image.png

speed_gain_slider (base speed)

steering_gain_slider (proportional gain, P)

steering_dgain_slider (derivative gain, D)

steering_bias_slider (base steering value)

We can adjust these four values through the sliders to get our Jetbot driving in its best condition.
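
In terms of these sliders, the steering and motor values are computed roughly as follows (a sketch of the PD law; the variable names ``angle`` and ``angle_last`` are the ones used in the callback sketched later):

```python
# PD steering law (sketch): P term on the current angle, D term on its change.
steering = (angle * steering_gain_slider.value
            + (angle - angle_last) * steering_dgain_slider.value
            + steering_bias_slider.value)
left_motor  = speed_gain_slider.value + steering
right_motor = speed_gain_slider.value - steering
```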

4. Autopilot on the track with trained neural network model

First, we need to load the ResNet-18 neural network that has been used many times in earlier lessons:

image.png 

Then, import the packages we need and create the relevant instances. To facilitate debugging, a line is added that lets the gamepad move the Jetbot.

! Note: Before turning on autonomous driving, run the corresponding code to stop the gamepad-control thread. The two would send conflicting commands to the Jetbot, so they cannot run at the same time!

image.png 

Next, load the trained model ``best_steering_model_xy.pth`` and transfer it to the GPU for calculation:

image.png 
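
A sketch of loading the trained weights (running the model in half precision on the GPU is an assumption, used here to speed up inference on the Jetson):

```python
# Sketch: rebuild the same network head, load the trained weights and move the
# model to the GPU in eval mode (half precision speeds up inference on Jetson).
import torch
import torchvision

device = torch.device('cuda')
model = torchvision.models.resnet18(pretrained=False)
model.fc = torch.nn.Linear(512, 2)
model.load_state_dict(torch.load('best_steering_model_xy.pth'))
model = model.to(device).eval().half()
```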

After the above code is executed, we have loaded the model, but there is a small problem.

The format our model was trained on does not exactly match the format of the camera frames, so we need to do some preprocessing.

Proceed as follows:

1). Convert from HWC layout to CHW layout

2). Normalize using the same parameters as during training (our camera provides values in the range [0, 255], while the training images were loaded in the range [0, 1], so we need to scale by 255.0)

3). Transfer data from CPU memory to GPU memory

4). Add a batch dimension

image.png 
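
A sketch of such a preprocessing function, assuming the JetBot camera delivers 224x224 uint8 frames and the half-precision model loaded above:

```python
# Sketch of the preprocessing: HWC uint8 frame -> normalized CHW half tensor
# on the GPU with a leading batch dimension.
import PIL.Image
import torch
import torchvision.transforms as transforms

device = torch.device('cuda')
mean = torch.Tensor([0.485, 0.456, 0.406]).to(device).half()
std = torch.Tensor([0.229, 0.224, 0.225]).to(device).half()

def preprocess(image):
    image = PIL.Image.fromarray(image)                 # HWC uint8 array -> PIL
    image = transforms.functional.to_tensor(image)     # CHW float in [0, 1]
    image = image.to(device).half()
    image.sub_(mean[:, None, None]).div_(std[:, None, None])
    return image[None, ...]                            # add batch dimension
```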

Then, the camera feed is displayed in real time so the Jetbot's pose can be adjusted for autonomous driving.

image.png 
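
A sketch of the live view for the driving demo, again assuming the JetBot ``Camera`` class at 224x224:

```python
# Sketch: 224x224 camera instance and live display for the driving demo.
import traitlets
import ipywidgets.widgets as widgets
from IPython.display import display
from jetbot import Camera, bgr8_to_jpeg

camera = Camera.instance(width=224, height=224)
image_widget = widgets.Image(format='jpeg', width=224, height=224)
traitlets.dlink((camera, 'value'), (image_widget, 'value'), transform=bgr8_to_jpeg)
display(image_widget)
```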

Create a robot instance that controls the Jetbot's motion.

image.png 
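
A sketch of this cell, using the JetBot ``Robot`` class:

```python
# Sketch: create the motor-control instance.
from jetbot import Robot

robot = Robot()
```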

Now, we will define the slider to control the Jetbot.

(Tips: We have configured initial values for the sliders. These initial values apply to our official Yahboom map; if you are training on your own, different road map, they may not suit your dataset, so please increase or decrease the slider values according to your setup and environment.)

1) Speed control (speed_gain_slider): To start the Jetbot moving, increase ``speed_gain_slider``.

2) Steering gain control (steering_gain_slider): If you see the Jetbot wobbling or spinning, reduce ``steering_gain_slider`` until its motion becomes smooth.

3) Steering bias control (steering_bias_slider): If you see the Jetbot hugging the far right or far left of the track, adjust this slider until the Jetbot tracks the line or track near the center.

(Note: Do not move the slider values very quickly; adjust the motion parameters gently to get smooth road-following behavior.)

image.png 
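
A sketch of defining the four sliders (the ranges and initial values shown are illustrative, not the official Yahboom defaults):

```python
# Sketch of the four control sliders; tune ranges/initial values to your track.
speed_gain_slider = widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.0,
                                        description='speed gain')
steering_gain_slider = widgets.FloatSlider(min=0.0, max=1.0, step=0.01, value=0.2,
                                           description='steering gain (P)')
steering_dgain_slider = widgets.FloatSlider(min=0.0, max=0.5, step=0.001, value=0.0,
                                            description='steering d-gain (D)')
steering_bias_slider = widgets.FloatSlider(min=-0.3, max=0.3, step=0.01, value=0.0,
                                           description='steering bias')
display(speed_gain_slider, steering_gain_slider,
        steering_dgain_slider, steering_bias_slider)
```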

The x and y sliders will display the predicted x, y values. The steering slider will display our estimated steering value. This value is not the actual angle of the target, but an almost proportional value.

When the actual angle is ``0``, this value is 0, and it increases/decreases as the actual angle increases/decreases:

image.png 

Next, we'll create a function that will be called when the camera's value changes. This function will perform the following steps:

1) Preprocess camera image

2) Run the neural network

3) Calculate the approximate steering value

4) Control the motors using proportional-derivative (PD) control

image.png 
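
A sketch of the callback, following the four steps above (the arctan-based steering estimate follows the common JetBot road-following example, and it uses the sliders, ``camera``, ``robot``, ``model`` and ``preprocess`` sketched earlier):

```python
# Sketch of the per-frame callback: preprocess, run the network, compute a
# steering value with PD control, and drive the motors.
import numpy as np

angle = 0.0
angle_last = 0.0

def execute(change):
    global angle, angle_last
    image = change['new']
    xy = model(preprocess(image)).detach().float().cpu().numpy().flatten()
    x = xy[0]
    y = (0.5 - xy[1]) / 2.0

    angle = np.arctan2(x, y)                 # approximate steering angle
    pid = (angle * steering_gain_slider.value
           + (angle - angle_last) * steering_dgain_slider.value)
    angle_last = angle
    steering = pid + steering_bias_slider.value
    speed = speed_gain_slider.value

    robot.left_motor.value = max(min(speed + steering, 1.0), 0.0)
    robot.right_motor.value = max(min(speed - steering, 1.0), 0.0)

execute({'new': camera.value})               # run once on the current frame
```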

We have created a neural network execution function, but now we need to attach it to the camera for processing.

(Tips: This code will make the robot move!! Please place the Jetbot on the map you trained on. If the data you collected and the trained model are good, you will see the Jetbot running smoothly on the road.)

image.png 
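
Attaching the callback is a one-liner using the camera's traitlets interface:

```python
# Sketch: call execute() on every new camera frame -- the robot starts moving here.
camera.observe(execute, names='value')
```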

If your Jetbot is working properly, it will generate a new command for each new camera frame. You can now place the Jetbot on the track where you collected data and see whether it follows the track. If you want to stop this behavior, unbind the callback function by executing the code in the following cell:

image.png 
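
A sketch of unbinding the callback and stopping the motors:

```python
# Sketch: stop processing camera frames and halt the motors.
import time

camera.unobserve(execute, names='value')
time.sleep(0.1)      # give the last callback time to finish
robot.stop()
```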

You can run the following code to start the thread that remotely controls the Jetbot with the gamepad.

! Note: Before using the gamepad, please run the code in the cell above to stop the Jetbot's autonomous driving function.

image.png 

The corresponding complete source code is located at:

/home/jetbot/Notebook/17.Autopilot-Basic/Autopilot-Basic.ipynb

5. Autopilot with pedestrian detection and stopping (other objects optional)

Building on the basic Autopilot version, we port the object-detection function from the object-following example:

Load the object detection model and add the related methods. The code is shown below:

image.png 
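
A sketch of loading the detector, assuming the SSD MobileNet V2 TensorRT engine used in the object-following example:

```python
# Sketch: load the pre-built COCO object detection engine shipped with JetBot.
from jetbot import ObjectDetector

object_model = ObjectDetector('ssd_mobilenet_v2_coco.engine')
```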

This code is slightly different from the earlier live-camera display code: because the frame is processed before being returned to the display widget, we do not use a traitlets dlink between the camera value and the display widget value here.

image.png 

Use the following code to run object detection and inspect the detected objects:

image.png 

Then, we implement object detection by adding the following code to the motion-control code of the basic autopilot version.

If the object corresponding to the label we set is detected, the Jetbot stops, and it resumes autonomous driving once the object no longer appears in the field of view:

image.png 
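
A sketch of the added check (the label value 1, the COCO id for "person", and the helper name are illustrative):

```python
# Sketch: inside the autopilot callback, stop when the chosen label is in view.
LABEL = 1    # COCO class id 1 = person; change to track another object

def object_in_view(image):
    detections = object_model(image)         # one list of detections per image
    matching = [d for d in detections[0] if d['label'] == int(LABEL)]
    return len(matching) > 0

# Added near the top of execute():
#     if object_in_view(image):
#         robot.stop()
#         return                             # resume driving once it disappears
```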

The corresponding complete source code is located at:

/home/jetbot/Notebook/18.Autopilot Pedestrian detects parking/Autopilot Pedestrian detects parking.ipynb
