Abstract: Visual navigation offers low cost, wide applicability, and a high degree of intelligence, so it is widely used in orchard navigation tasks; identifying the navigation path quickly and accurately is therefore a key step toward achieving visual navigation. To address the many interference factors and the complex image background encountered when applying a visual navigation system in a dragon fruit orchard, a visual navigation path recognition method based on an improved DeepLabv3+ network was proposed. First, the Xception backbone of the traditional DeepLabv3+ was replaced with MobileNetV2, and the atrous convolutions in the atrous spatial pyramid pooling (ASPP) module were replaced with depthwise separable convolutions (DSC), which improved the model's detection rate while greatly reducing the number of parameters and the memory footprint. Second, coordinate attention (CA) was introduced into the feature extraction module to help the model locate and identify road areas. Experiments were then conducted on a self-built dragon fruit orchard road dataset covering three different road conditions. The results showed that, compared with the traditional DeepLabv3+, the MIoU and MPA of the improved DeepLabv3+ increased by 0.79 and 0.41 percentage points, reaching 95.80% and 97.86%, respectively. The frame rate increased to 57.89 frames per second (FPS), and the number of parameters and the memory footprint were reduced by 92.92% and 97.00%, to 3.87×10⁶ and 15.0 MB, respectively. The recognition results of the improved model on orchard roads were verified on the test set, indicating good robustness and anti-interference capability. In addition, compared with the PSPNet and U-Net networks, the improved model offered clear advantages in detection rate, number of parameters, and model size, making it more suitable for deployment on embedded devices. Based on the segmentation results, the edge information on both sides of the road was extracted, the road boundary lines were fitted by the least squares method, and the navigation path was then extracted with an angle bisector line fitting algorithm. The navigation path recognition accuracy was tested in the three road environments; the average pixel error was 22 pixels and the average distance error was 7.58 cm. With an orchard road width of 3 m in this test, the average distance error accounted for only 2.53% of the road width. The research results can therefore provide an effective reference for visual navigation in dragon fruit orchards.
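
The abstract states that the atrous convolutions in ASPP were replaced with depthwise separable convolutions. The following is a minimal PyTorch sketch of one such ASPP branch, not the authors' code; the channel sizes and dilation rate are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DepthwiseSeparableASPPBranch(nn.Module):
    """One ASPP branch with the standard atrous convolution replaced by a
    depthwise separable convolution (depthwise 3x3 atrous conv + pointwise 1x1)."""
    def __init__(self, in_ch=320, out_ch=256, dilation=6):
        super().__init__()
        # Depthwise 3x3 atrous convolution: one filter per input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=dilation, dilation=dilation,
                                   groups=in_ch, bias=False)
        # Pointwise 1x1 convolution mixes information across channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.pointwise(self.depthwise(x))))

# Example: a feature map shaped like the output of a MobileNetV2-style backbone.
feats = torch.randn(1, 320, 32, 32)
branch = DepthwiseSeparableASPPBranch()
print(branch(feats).shape)  # torch.Size([1, 256, 32, 32])
```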
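
The abstract also describes the path-extraction post-processing: fit the left and right road boundary lines by least squares and take their angle bisector as the navigation path. Below is a minimal NumPy sketch of that idea, not the authors' implementation; the boundary-point extraction from the binary road mask is deliberately simplified.

```python
import numpy as np

def fit_line(points):
    """Least-squares fit x = a*y + b (column as a function of row),
    which stays stable for near-vertical road boundaries."""
    ys, xs = points[:, 0].astype(float), points[:, 1].astype(float)
    a, b = np.polyfit(ys, xs, 1)
    return a, b

def boundary_points(mask):
    """For each image row, take the leftmost and rightmost road pixels."""
    left, right = [], []
    for y in range(mask.shape[0]):
        xs = np.flatnonzero(mask[y])
        if xs.size:
            left.append((y, xs[0]))
            right.append((y, xs[-1]))
    return np.array(left), np.array(right)

def navigation_line(mask):
    """Angle bisector of the two fitted boundary lines, as x = a*y + b."""
    left, right = boundary_points(mask)
    a1, b1 = fit_line(left)
    a2, b2 = fit_line(right)
    # Unit direction vectors (dy, dx) of the two boundary lines.
    d1 = np.array([1.0, a1]) / np.hypot(1.0, a1)
    d2 = np.array([1.0, a2]) / np.hypot(1.0, a2)
    d = d1 + d2                 # bisector direction
    a = d[1] / d[0]             # slope dx/dy of the bisector
    if abs(a1 - a2) > 1e-6:
        # Anchor the bisector at the intersection of the boundary lines.
        y0 = (b2 - b1) / (a1 - a2)
        b = (a1 * y0 + b1) - a * y0
    else:
        # Parallel boundaries: fall back to the mid-line.
        b = (b1 + b2) / 2.0
    return a, b

# Toy example: a trapezoidal "road" region in a 100x100 binary mask.
mask = np.zeros((100, 100), dtype=np.uint8)
for y in range(100):
    mask[y, 40 - y // 5 : 60 + y // 5] = 1
print(navigation_line(mask))  # roughly a vertical center line near x = 49.5
```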