+33 (0)6 85 22 07 25 mario.diaznava@st.com

Reducing pesticide use with an autonomous system for automated vegetable weeding.

Within the digital farming domain, use case 2.1, developed by Bordeaux-INP, will use the neuromorphic devices developed in ANDANTE to implement an autonomous system for automated mechanical weeding of vegetables. Vegetable production requires a wide variety of farming operations; among them, early weeding is necessary to avoid competition between weeds and crops. If the competition is too strong, the farmer can suffer severe crop yield losses.

To create a solution applicable to a wide range of crops and planting parameters, Bordeaux-INP decided to develop an autonomous weeding system, mounted on an electric tractor and based on AI vision, that can be adapted to different cultivation methods without designing a new system each time.

Structurally, the weeding block consists of two parts: the vision system (managed by Bordeaux-INP) and the hoeing tools (managed by an external partner).

The vision system is modular: it can be transposed to other agricultural applications (yield estimation, vine disease detection, …) and attached to different vehicles (tractor, harvesting machine, quad, …).

The solution requires high precision to detect the weeds present in the field while avoiding destruction of the crops. The NeuroCorgi ASIC developed in ANDANTE and deployed in this use case can perform object detection with a deep learning CNN architecture at low latency while keeping a low power consumption. This type of technology is very attractive for digital farming tools at the edge, which must operate completely autonomously. Ultra-low power consumption is required to ensure long battery autonomy, so that the farmer can perform the weeding operation without interruption.

Currently, the network has been implemented and tested in PyTorch, using an architecture similar to the one implemented on the NeuroCorgi ASIC. The network is a MobileNetV1-based SSD detector, with the backbone trained on the COCO dataset and the detection head trained on our crop image dataset.
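The MobileNetV1 backbone mentioned above is built from depthwise-separable convolutions. A minimal PyTorch sketch of that building block is given below; the channel counts, stride, and input size are illustrative only and do not come from the actual NeuroCorgi implementation:

```python
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """MobileNetV1 building block: a depthwise 3x3 convolution
    followed by a pointwise 1x1 convolution."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.block = nn.Sequential(
            # Depthwise: one filter per input channel (groups=in_ch)
            nn.Conv2d(in_ch, in_ch, 3, stride=stride, padding=1,
                      groups=in_ch, bias=False),
            nn.BatchNorm2d(in_ch),
            nn.ReLU(inplace=True),
            # Pointwise: 1x1 convolution mixes channels
            nn.Conv2d(in_ch, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)

# Sanity check on a dummy feature map (shapes are made up)
feat = DepthwiseSeparableConv(32, 64, stride=2)(torch.randn(1, 32, 56, 56))
```

Splitting the convolution into depthwise and pointwise steps is what makes MobileNetV1 cheap enough for edge inference, which is why this backbone suits a battery-powered weeding system.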

Some changes to the network structure were necessary. As the training database for the NeuroCorgi encoder contains few examples of plants, the initial detection results were inadequate. Duplicating some of the encoder layers and making them trainable considerably improved the detection accuracy. The resulting network architecture is presented in Fig. 1.
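The adaptation described above, keeping the fixed encoder frozen while training duplicated copies of some of its layers, can be sketched in PyTorch as follows. The layer shapes and the two-stage `nn.Sequential` are hypothetical stand-ins, not the actual NeuroCorgi encoder:

```python
import copy
import torch.nn as nn

# Hypothetical stand-in for the fixed NeuroCorgi encoder stages
backbone = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
for p in backbone.parameters():
    p.requires_grad_(False)  # the on-chip backbone weights stay fixed

# Duplicate a late encoder stage and make the copy trainable,
# so it can adapt to plant imagery absent from the original training set
adapter = copy.deepcopy(backbone[2])
for p in adapter.parameters():
    p.requires_grad_(True)
```

During fine-tuning, only the duplicated layers and the SSD head receive gradient updates, so the weights already frozen into the ASIC remain untouched.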

Fig. 1: The adapted network architecture. The figure presents how the duplicated MobileNetV1 layers and the SSD head are connected to the NeuroCorgi backbone.

Thanks to this simulated environment, we were able to compare the accuracy of the AI model that we are currently using with the new one developed to run on the NeuroCorgi ASIC.

The KPIs we identified to compare the detections are AP50 and AP75, which are commonly used object detection metrics and describe the accuracy of the model. The training and test databases are the same for both models, to keep the same configuration and provide a more robust comparison.
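The two metrics differ only in how strictly a predicted box must overlap a ground-truth box: AP50 counts a detection as correct when the intersection over union (IoU) is at least 0.5, while AP75 requires at least 0.75. A minimal pure-Python illustration, assuming axis-aligned boxes in (x1, y1, x2, y2) format with made-up values:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction slightly taller than the ground truth:
pred = (0, 0, 10, 10)
gt = (0, 0, 10, 8)
print(iou(pred, gt))  # 0.8 -> true positive at both the AP50 and AP75 thresholds
```

Reporting both thresholds matters for this use case: a detection that passes AP50 but fails AP75 may still be localized too loosely to guide a hoeing tool safely past the crop.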