17/08/2021

Robot Decision for Adaptation to Human Behavior, Off-Road and Task Diversity and Safety Preservation

Automated robots are increasingly being adapted to work efficiently in off-road agricultural environments.
As part of the FIRA 2020 scientific workshop session "Robot decision for adaptation to human behavior, off-road and task diversity and safety preservation", three speakers shared their insights in three presentations.
The first presentation, by Thibault Clamens, is "Real-Time Multispectral Image Processing and Registration on 3D Point Cloud for Vineyard Analysis."
The second, presented by Koen van Boheemen, is "Development of a smart obstacle detection system for an autonomous orchard robot."
Finally, Ashley Hill presents "Real-time adaptation of a robot’s behavior to changes in the environment."

Real-Time Multispectral Image Processing and Registration on 3D Point Cloud for Vineyard Analysis

By Thibault Clamens, Georgios Alexakis, Raphaël Duverne, Eric Fauvet, Ralph Seulin and David Fofi of Vitibot, CNRS, ImViA and the University of Bourgogne Franche-Comté.

Precision agriculture and precision viticulture are developing rapidly. To act effectively, robots require robust perception of the crop and its surrounding environment.
Computer vision systems must identify plant parts (branches, stems, leaves, flowers, fruits and vegetables) and assess their health status. They must also merge information from various plants, measure agronomic indicators, classify them and extract data so that the agriculturist or expert can make relevant decisions. We propose a real-time method to acquire, process and register multispectral images onto 3D point clouds. The sensor system, consisting of a multispectral camera and a Kinect V2, can be embedded on a ground robot or other terrestrial vehicle. Experiments conducted in a vineyard demonstrate that it enables agronomic analysis.

Recent agricultural practices require working as close as possible to the plant with specific actuators, carriers and sensors. Robotics and new technologies can address some of these difficulties and provide assistance to agricultural workers. Agricultural robotics must tackle increasingly complex tasks, which calls for new instruments and new smart tools. In this contribution, we present a complete computer vision pipeline, embedded on a mobile robot, that fuses geometric and radiometric information.

We propose:

  • Acquisitions of several modalities;
  • Multi-spectral image pre-processing and processing methods;
  • Multi-spectral image registration with 3D point cloud (a minimal sketch of this step follows below).
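
At its core, the registration step amounts to projecting each 3D point measured by the depth camera into the multispectral image plane through the calibrated intrinsics and extrinsics, then attaching the reflectance of each band found at that pixel to the point. The following Python snippet is a minimal sketch of that idea, not the authors' implementation: the function name, the assumption that the band images are already undistorted and aligned with each other, and the nearest-pixel sampling are all illustrative choices.

    import numpy as np

    def register_bands_to_cloud(points_xyz, band_images, K, R, t):
        """Attach multispectral band values to a 3D point cloud.

        points_xyz  : (N, 3) points in the depth camera frame (e.g. Kinect V2).
        band_images : (B, H, W) array, one image per spectral band, assumed
                      undistorted and co-registered between bands.
        K           : (3, 3) intrinsic matrix of the multispectral camera.
        R, t        : extrinsic rotation (3, 3) and translation (3,) taking
                      depth-camera coordinates into multispectral-camera coordinates.

        Returns an (N, B) array of band values (NaN where a point falls
        outside the multispectral image).
        """
        B, H, W = band_images.shape
        # Transform the points into the multispectral camera frame.
        cam_pts = points_xyz @ R.T + t
        # Keep only points in front of the camera.
        valid = cam_pts[:, 2] > 0
        # Pinhole projection onto the image plane.
        uv = cam_pts @ K.T
        uv = uv[:, :2] / uv[:, 2:3]
        u = np.round(uv[:, 0]).astype(int)
        v = np.round(uv[:, 1]).astype(int)
        inside = valid & (u >= 0) & (u < W) & (v >= 0) & (v < H)

        values = np.full((points_xyz.shape[0], B), np.nan)
        values[inside] = band_images[:, v[inside], u[inside]].T
        return values

In practice, occlusion handling and sub-pixel interpolation would refine this, but the geometric principle is the same.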

The data were acquired in the vineyard. Analyses resulting from this fusion are intended to help solve wine-growing problems such as:

  • Analysis of the effectiveness of phytosanitary treatments over time;
  • The temporal follow-up of the vineyard plots;
  • Early detection of plant pathologies (one such indicator is sketched below).
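
One concrete agronomic indicator that such a fused representation supports is the NDVI, which can be computed per 3D point once red and near-infrared reflectances have been attached to the cloud, and then tracked over time for plot follow-up or early pathology detection. A minimal sketch, assuming the band layout and the registration output from the previous example:

    import numpy as np

    def ndvi_per_point(band_values, red_idx, nir_idx, eps=1e-6):
        """NDVI = (NIR - Red) / (NIR + Red), computed for each registered point.

        band_values : (N, B) array returned by the registration step
                      (NaN where a point had no multispectral measurement).
        red_idx, nir_idx : column indices of the red and near-infrared bands,
                           which depend on the multispectral camera used.
        """
        red = band_values[:, red_idx]
        nir = band_values[:, nir_idx]
        return (nir - red) / (nir + red + eps)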

First picture: Multi-sensor system consisting of a Kinect V2 and a multi-spectral camera
Second picture: Summit XL robot instrumented with the multi-sensor system


Development of a smart obstacle detection system for an autonomous orchard robot

By Koen van Boheemen, Rick van de Zedde and Gookhwan Kim of Wageningen University and Research and the Rural Development Administration.

One of the most crucial requirements an agricultural robot must meet for commercial success is the safe execution of tasks. To ensure safety, an agricultural robot must detect and avoid unexpected obstacles, such as people, animals, and equipment. The objective of our work was to research and develop a reliable hardware and software system for smart obstacle detection on an autonomous orchard robot.

Materials and methods

We conducted a literature review of affordable vision sensors and deep learning algorithms that could be used for smart obstacle detection. We investigated four RGB-color and nine RGB-depth (RGB-D) cameras. Regarding the deep learning algorithms, we investigated five object detection algorithms and two instance segmentation algorithms (instance segmentation is a combination of object detection and pixel segmentation). Based on this review, we selected the most suitable camera and algorithm combination, which we then integrated onto an edge computer on a Husky A200 mobile robot. We evaluated the obstacle detection performance by having the robot autonomously navigate two orchard paths containing different static and dynamic obstacles (a human, a bicycle, a tractor, a fruit crate and a car).

Results and conclusions

From the literature review, we concluded that the RealSense D435 was, at the time, the most appropriate affordable camera for smart obstacle detection. The D435 acquires color and depth images at high quality and a sufficiently high frame rate, allowing simultaneous obstacle detection (using the color image) and distance measurement to the obstacles (using the depth image). For the deep learning implementation, we concluded that Yolact++ was the preferred algorithm, because it was then one of the fastest instance segmentation algorithms. For maximum safety, instance segmentation is desirable, because pixel segmentation allows a precise depth measurement to the detected obstacle. We integrated the D435 camera and the Yolact++ algorithm into the Robot Operating System (ROS) of the Husky robot. The obstacle detection system continuously sent a "safe" signal to ROS whenever no obstacles were detected within 3 meters of the robot. The robot was configured to drive only if a "safe" signal had been received within the last 0.5 s, guaranteeing that it would halt upon obstacle detection or algorithm failure. The obstacle detection and stopping performance of the Husky robot were tested by approaching the five obstacles in the two orchard paths while driving autonomously. All obstacles were successfully detected, and the robot halted 3 meters from the detected obstacle. In addition, the system proved fast: the average cycle rate was 4 frames per second (FPS). We concluded that the obstacle detection system was accurate and fast enough to be applied on an autonomous orchard robot.
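
The fail-safe logic described above, driving only while a "safe" message has arrived within the last 0.5 s, can be illustrated as a small watchdog node in rospy. This is a hedged sketch rather than the authors' code: the topic names /obstacle_detection/safe and /cmd_vel, the 20 Hz loop rate, and the placeholder forward speed are assumptions.

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import Bool
    from geometry_msgs.msg import Twist

    SAFE_TIMEOUT = 0.5   # seconds: how fresh the last "safe" signal must be
    last_safe_stamp = None

    def safe_callback(msg):
        # Record the time of the last "safe" message (no obstacle within 3 m).
        global last_safe_stamp
        if msg.data:
            last_safe_stamp = rospy.Time.now()

    def main():
        rospy.init_node('safety_gate')
        rospy.Subscriber('/obstacle_detection/safe', Bool, safe_callback)
        cmd_pub = rospy.Publisher('/cmd_vel', Twist, queue_size=1)
        rate = rospy.Rate(20)

        while not rospy.is_shutdown():
            cmd = Twist()
            now = rospy.Time.now()
            if last_safe_stamp is not None and (now - last_safe_stamp).to_sec() < SAFE_TIMEOUT:
                cmd.linear.x = 0.5   # nominal forward speed (placeholder value)
            # Otherwise cmd stays at zero: the robot halts when the "safe"
            # signal is stale, i.e. an obstacle was detected or the detector failed.
            cmd_pub.publish(cmd)
            rate.sleep()

    if __name__ == '__main__':
        main()

Gating the command on the freshness of the "safe" signal, rather than on an explicit "obstacle" message, is what makes the behavior fail-safe: any silence from the detector stops the robot.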

Real-time adaptation of a robot’s behavior to changes in the environment

By Ashley Hill, Eric Lucet and Roland Lenain of the University Paris-Saclay, CEA-List, the University Clermont Auvergne and INRAE.

Advances in control and perception have enabled the spread of mobile robots for well-known and well-defined tasks. Whether in an indoor context, such as production lines, or in an outdoor context (autonomous mechanical weeding, for example), today’s mobile robots can address specific, but limited, use cases in dynamic contexts. Indeed, most algorithms used today are designed to work with multiple sources of sensory information. Thus, the absence of information, or the use of noisy information, makes a control law designed for a known accuracy less efficient. Although much work exists on determining the quality of perception, the impact of this quality on the robot’s behavior is still under-exploited, leading the robot to stop or perform inappropriate movements. The objective of our work was therefore to determine the control parameters from this underused information contained in the control loop, using neural networks.

Material and methods

Our work focused on the gain tuning of an existing steering controller used in agricultural robotics. This gain tuning was achieved with a neural network, trained offline in simulation using episodic reinforcement learning. The episodic training uses the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) to obtain the desired behavior for the neural network, i.e. the one that minimizes a given objective function. This approach was chosen because classical reinforcement learning methods fail to converge, the Markov hypothesis not being respected in the context of this problem. The network is trained over multiple trajectories, sliding conditions, and perception qualities, so that it generalizes to new trajectories and environments. A mobile robot was then driven over grass, gravel, and fields at high speed to validate this methodology.
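
Structurally, the training loop couples a small neural network, mapping the controller's observable signals (path-following errors, estimated sliding, perception quality) to its gains, with CMA-ES operating on the network's flattened parameters: sample a population of candidate parameter vectors, score each on a simulated episode, and feed the costs back to the strategy. The sketch below, using the cma package, illustrates that structure only; the network size, the exponential output used to keep the gains positive, and the dummy episode cost are our assumptions, not the authors' design.

    import numpy as np
    import cma  # pip install cma

    # Small MLP that maps an observation (e.g. lateral/angular errors,
    # estimated sliding, GPS quality) to positive steering-controller gains.
    OBS_DIM, HIDDEN, N_GAINS = 4, 16, 2

    def unpack(theta):
        """Split a flat parameter vector into the MLP's weights and biases."""
        i = 0
        W1 = theta[i:i + OBS_DIM * HIDDEN].reshape(OBS_DIM, HIDDEN); i += OBS_DIM * HIDDEN
        b1 = theta[i:i + HIDDEN]; i += HIDDEN
        W2 = theta[i:i + HIDDEN * N_GAINS].reshape(HIDDEN, N_GAINS); i += HIDDEN * N_GAINS
        b2 = theta[i:i + N_GAINS]
        return W1, b1, W2, b2

    def gains(theta, obs):
        W1, b1, W2, b2 = unpack(theta)
        h = np.tanh(obs @ W1 + b1)
        return np.exp(h @ W2 + b2)        # exp keeps the gains strictly positive

    def episode_cost(theta):
        """Placeholder: a real episode would run the steering controller in
        simulation over a sampled trajectory, sliding condition and perception
        noise, and return an integrated tracking cost."""
        obs = np.random.randn(OBS_DIM)          # stand-in for a simulator rollout
        k = gains(theta, obs)
        return float(np.sum((k - 1.0) ** 2))    # dummy objective for illustration

    n_params = OBS_DIM * HIDDEN + HIDDEN + HIDDEN * N_GAINS + N_GAINS
    es = cma.CMAEvolutionStrategy(np.zeros(n_params), 0.5, {'maxiter': 200})
    while not es.stop():
        candidates = es.ask()                   # sample a population of parameter vectors
        es.tell(candidates, [episode_cost(c) for c in candidates])
    best_theta = es.result.xbest                # parameters of the trained gain tuner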

Results and conclusion

Experiments using the neural gain tuning method were compared with a deterministic gain tuning method and a classical constant gain tuning method. They were conducted on INRAE’s RobuFast robotic platform at 4 m/s. This platform uses the ROS middleware, with a GPS and an IMU sensor at 10 Hz. In these experiments, the neural gain method reduced the surface error by 62% compared with the constant gain method and by 43% compared with the deterministic gain method. This is thanks to the adaptability of the neural gain method, which can change the gains of the steering controller in real time depending on the sliding conditions, GPS accuracy, and path-following errors. This method, however, requires a lot of offline training time in a complex simulation, and due to the black-box nature of neural networks it cannot guarantee the stability of the controller. Overall, this method shows great promise for adapting the controller’s behavior and reactivity to external situations, in order to increase performance without compromising the behavior of the controller.


Categories: #Labs