Date of Award

8-2024

Document Type

Dissertation

Degree Name

Doctor of Philosophy (PhD)

Department

Automotive Engineering

Committee Chair/Advisor

Rahul Rai

Committee Member

Yunyi Jia

Committee Member

Venkat Krovi

Committee Member

Bing Li

Abstract

Autonomous tractors equipped with intelligent sprayers have become a pivotal aspect of smart farming (SF), marking a transformative shift in traditional agricultural practices and holding the potential to revolutionize the farming industry. With 2,453,620 fruit-bearing acres in the United States as of 2022, there is a pressing need for the implementation of autonomous systems for farm tractors and intelligent spraying systems in orchards. These advancements can significantly reduce labor costs, address labor shortages, and minimize spray loss. Furthermore, to enhance profitability and productivity, it is essential to develop low-cost yet effective vision-based autonomy systems that can operate efficiently across various seasons and weather conditions.

The autonomous tractors and retrofitting kits currently under development are capable of executing autonomous driving functions, including obstacle detection, simultaneous localization and mapping (SLAM), path planning, and motion control. They incorporate a series of expensive sensors, such as multiple high-resolution 3D LiDAR units and high-accuracy real-time kinematic (RTK) positioning systems. Nevertheless, the widespread adoption of autonomous tractors is hindered by their high cost, as farms must carefully prioritize expenditures and may find these technologies prohibitively expensive. This dissertation explores affordable and adaptable hardware and software solutions for traditional agricultural machines. To this end, a prototype autonomous tractor and intelligent sprayer were developed using the John Deere X390 lawnmower, which serves as a suitable miniature of a full-sized farm tractor and can be readily equipped with control actuators and hardware. The prototype incorporates two adaptable pedal robots and a steering-wheel robot that can be seamlessly integrated with conventional tractors lacking X-by-wire technology. Furthermore, the distributed, module-based system architecture of the prototype facilitates straightforward extension to a variety of agricultural applications, ensuring versatility and scalability. This approach not only addresses the cost barrier but also provides a practical, adaptable path for enhancing the automation capabilities of existing agricultural machinery.
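
As a rough illustration of the distributed, module-based layout described above, the following Python sketch wires retrofit actuation modules (a steering-wheel robot and pedal robots) to a toy publish/subscribe bus. The module names, message fields, and the bus itself are assumptions standing in for the prototype's actual middleware and interfaces.

```python
# Minimal sketch of a distributed, module-based control layout: each retrofit
# actuation robot exposes a narrow command interface and is driven over a shared
# message bus, so new modules can be added without touching existing ones.
# All names and fields below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class SteeringCmd:
    angle_rad: float        # desired steering-wheel angle

@dataclass
class PedalCmd:
    pedal: str              # "throttle" or "brake" (one robot per pedal)
    position: float         # normalized pedal depression, 0.0-1.0

class ModuleBus:
    """Toy publish/subscribe bus standing in for the real middleware."""
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, topic, callback):
        self.subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, msg):
        for cb in self.subscribers.get(topic, []):
            cb(msg)

# Wiring: a motion-planning module would publish commands; each retrofit robot
# subscribes only to its own topic.
bus = ModuleBus()
bus.subscribe("steering_cmd", lambda m: print(f"steering robot -> {m.angle_rad:.2f} rad"))
bus.subscribe("pedal_cmd", lambda m: print(f"{m.pedal} robot -> {m.position:.2f}"))
bus.publish("steering_cmd", SteeringCmd(angle_rad=0.15))
bus.publish("pedal_cmd", PedalCmd(pedal="throttle", position=0.3))
```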

Camera-based approaches have been proposed to replace expensive LiDAR- or RTK-based solutions for unmanned tractors operating in orchard-like environments. However, constructing a completely autonomous navigation stack with a low-cost camera and onboard computer presents additional challenges, including the need for a computationally efficient perception model, rapid path planning without costly SLAM, and reliable obstacle avoidance. In this dissertation, a novel vision-based autonomous navigation stack is devised to address these challenges using only an inexpensive stereo camera and inertial measurement unit (IMU). The computational pipeline consists of three modules: (1) a multi-task perception network, (2) a multi-feature fusion algorithm, and (3) a motion planning module. The multi-task perception network detects tree trunks, obstacles, and traversable areas simultaneously from the RGB image with high efficiency (69 FPS) and accuracy (mAP@.5 of 96.7% and mIoU of 98.1%). For global path planning and trajectory prediction, the multi-feature fusion algorithm integrates the downsized navigational features and transforms them from the image frame into the tractor frame. The motion planning module then uses the fused data to select the appropriate path (i.e., the center of the tree row, a U-turn path, or a J-turn path) and searches for the optimal trajectory using an optimized dynamic window approach (DWA) for path tracking. The proposed stack is implemented on our autonomous tractor prototype in a peach orchard, and its efficacy is demonstrated in comparison with a human-driven tractor.
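
To make the motion planning step concrete, the sketch below shows a minimal dynamic window approach (DWA) search over velocity commands for a unicycle-model vehicle. The velocity limits, cost weights, and collision margin are illustrative assumptions, not the optimized DWA parameters used in the dissertation.

```python
# Minimal sketch of a dynamic window approach (DWA) trajectory search for a
# unicycle-model tractor. Parameter names and cost weights are assumptions.
import numpy as np

def simulate(state, v, w, dt=0.1, horizon=2.0):
    """Roll out a constant (v, w) command; state = [x, y, yaw]."""
    x, y, yaw = state
    traj = []
    for _ in range(int(horizon / dt)):
        x += v * np.cos(yaw) * dt
        y += v * np.sin(yaw) * dt
        yaw += w * dt
        traj.append((x, y, yaw))
    return np.array(traj)

def dwa_plan(state, vel, goal, obstacles,
             v_lim=(0.0, 2.0), w_lim=(-1.0, 1.0),
             a_v=0.5, a_w=1.0, dt=0.1,
             weights=(1.0, 2.0, 0.2)):
    """Search the dynamic window for the (v, w) pair with the lowest cost."""
    v0, w0 = vel
    # Dynamic window: velocities reachable within one control step.
    v_range = (max(v_lim[0], v0 - a_v * dt), min(v_lim[1], v0 + a_v * dt))
    w_range = (max(w_lim[0], w0 - a_w * dt), min(w_lim[1], w0 + a_w * dt))
    best, best_cost = (0.0, 0.0), np.inf
    for v in np.linspace(*v_range, 11):
        for w in np.linspace(*w_range, 21):
            traj = simulate(state, v, w, dt)
            # Heading cost: distance from the trajectory end point to the goal.
            heading = np.hypot(goal[0] - traj[-1, 0], goal[1] - traj[-1, 1])
            # Clearance cost: inverse of the closest obstacle distance.
            if len(obstacles):
                d = np.min(np.hypot(obstacles[:, 0] - traj[:, None, 0],
                                    obstacles[:, 1] - traj[:, None, 1]))
                if d < 0.3:          # collision margin in metres (assumed)
                    continue
                clearance = 1.0 / d
            else:
                clearance = 0.0
            # Velocity cost: prefer faster forward motion.
            speed = v_lim[1] - v
            cost = weights[0] * heading + weights[1] * clearance + weights[2] * speed
            if cost < best_cost:
                best, best_cost = (v, w), cost
    return best

# Example: drive toward a goal 5 m ahead with one obstacle off to the side.
v, w = dwa_plan(state=[0.0, 0.0, 0.0], vel=(0.5, 0.0), goal=(5.0, 0.0),
                obstacles=np.array([[2.0, 0.5]]))
```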

Accurate and precise spraying in orchards is paramount for optimized agricultural practices, ensuring efficient pesticide utilization, minimized environmental impact, and enhanced crop yield by targeting specific areas with the right amount of treatment. The asymmetrical distribution of foliage and flowers in peach orchards poses a formidable challenge to precise spray application, impeding uniform treatment and compromising the efficacy of pest and disease control. To address this challenge, this dissertation introduces a novel deep neural network that maps an RGB image and its corresponding depth to a density map of peach flowers or foliage. The model consists of three components: (1) two ResNet-50-based backbones that extract contextual features from the RGB image and depth features from the depth data at multiple scales and levels; (2) an optimized depth-enhanced module that effectively fuses the distinct features extracted from the two input streams; and (3) a two-stage decoder that aggregates the high-level cross-modal features to regress a coarse density map and subsequently integrates it with the low-level cross-modal features for the final density map prediction. To evaluate the model, we collected 493 frames (206,095 instances) of peach flowers and 475 frames (350,833 instances) of foliage from peach orchards using our intelligent sprayer prototype equipped with stereo cameras. The proposed method outperforms state-of-the-art models on our datasets, demonstrating the superiority and efficacy of encoding canopy characteristics as flower and foliage density maps for blossom and cover sprays.
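
The following PyTorch sketch outlines a two-stream density-map regressor of the kind described above: RGB and depth ResNet-50 backbones, a simple cross-modal fusion block standing in for the depth-enhanced module, and a coarse-to-fine two-stage decoder. The fusion design, channel sizes, and layer choices are assumptions for illustration only.

```python
# Illustrative two-stream density-map regressor: RGB and depth ResNet-50
# backbones, a stand-in cross-modal fusion block, and a two-stage decoder.
import torch
import torch.nn as nn
from torchvision.models import resnet50

def backbone(in_ch=3):
    net = resnet50(weights=None)
    if in_ch != 3:  # the depth stream takes a single-channel input
        net.conv1 = nn.Conv2d(in_ch, 64, kernel_size=7, stride=2, padding=3, bias=False)
    # Expose the stem and the four residual stages for multi-scale features.
    return nn.ModuleDict({
        "stem": nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool),
        "layer1": net.layer1, "layer2": net.layer2,
        "layer3": net.layer3, "layer4": net.layer4,
    })

class DensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgb, self.dep = backbone(3), backbone(1)
        # Stand-in fusion blocks (1x1 conv over concatenated features).
        self.fuse_hi = nn.Conv2d(2048 * 2, 512, 1)   # layer4 features
        self.fuse_lo = nn.Conv2d(512 * 2, 128, 1)    # layer2 features
        self.coarse = nn.Sequential(nn.Conv2d(512, 128, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(128, 1, 1))
        self.refine = nn.Sequential(nn.Conv2d(128 + 1, 64, 3, padding=1), nn.ReLU(),
                                    nn.Conv2d(64, 1, 1))

    def run(self, bb, x):
        feats = {}
        x = bb["stem"](x)
        for name in ("layer1", "layer2", "layer3", "layer4"):
            x = bb[name](x)
            feats[name] = x
        return feats

    def forward(self, rgb, depth):
        fr, fd = self.run(self.rgb, rgb), self.run(self.dep, depth)
        hi = self.fuse_hi(torch.cat([fr["layer4"], fd["layer4"]], dim=1))
        lo = self.fuse_lo(torch.cat([fr["layer2"], fd["layer2"]], dim=1))
        # Stage 1: coarse density map from high-level cross-modal features.
        coarse = self.coarse(hi)
        coarse_up = nn.functional.interpolate(coarse, size=lo.shape[-2:],
                                              mode="bilinear", align_corners=False)
        # Stage 2: refine with low-level cross-modal features.
        fine = self.refine(torch.cat([lo, coarse_up], dim=1))
        return coarse, fine

# Example forward pass on a dummy 480x640 RGB-D frame.
model = DensityNet()
coarse_map, fine_map = model(torch.randn(1, 3, 480, 640), torch.randn(1, 1, 480, 640))
```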
