New autonomous robotic technology developed by Monash University researchers has the potential to address labour shortages and an increased demand for fresh produce.
A research team, led by Dr Chao Chen in Monash University’s Department of Mechanical and Aerospace Engineering, has developed an autonomous harvesting robot capable of identifying, picking and depositing apples in as little as seven seconds at full capacity.
Following extensive trials in February and March at Fankhauser Apples in Drouin, Victoria, the robot was able to harvest more than 85 per cent of all reachable apples in the canopy as identified by its vision system.
Of all apples harvested, less than six per cent were damaged due to stem removal. Apples without stems can still be sold, but don’t necessarily fit the cosmetic guidelines of some retailers.
With the robot limited to half its maximum speed, the median harvest rate was 12.6 seconds per apple. In streamlined pick-and-drop scenarios, the cycle time fell to roughly nine seconds.

At the robot's full capacity speed, the harvesting time for an individual apple can drop to as little as seven seconds.
“Our developed vision system can not only positively identify apples in a tree within its range in an outdoor orchard environment by means of deep learning, but also identify and categorise obstacles, such as leaves and branches, to calculate the optimum trajectory for apple extraction,” said Dr Chen, director of the Laboratory of Motion Generation and Analysis (LMGA).
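The idea of categorising detections before planning can be illustrated with a short sketch. The class names, confidence threshold, and tuple layout below are illustrative assumptions about a generic object detector's output, not the Monash system's actual interface.

```python
# Hypothetical post-processing of a deep-learning detector's output:
# keep confident detections and separate fruit from obstacles so a
# planner can route the gripper around leaves and branches.

APPLE = "apple"
OBSTACLES = {"leaf", "branch"}

def split_detections(detections, min_confidence=0.5):
    """Separate apple detections from obstacle detections.

    `detections` is a list of (label, confidence, bbox) tuples,
    where bbox is (x, y, w, h) in image coordinates.
    """
    apples, obstacles = [], []
    for label, confidence, bbox in detections:
        if confidence < min_confidence:
            continue  # discard low-confidence detections
        if label == APPLE:
            apples.append(bbox)
        elif label in OBSTACLES:
            obstacles.append(bbox)
    return apples, obstacles

# Example frame: two confident apples, one branch, one noisy detection.
frame = [
    ("apple", 0.92, (120, 80, 40, 40)),
    ("branch", 0.81, (100, 0, 30, 200)),
    ("apple", 0.30, (300, 150, 35, 35)),  # below threshold, ignored
    ("apple", 0.88, (220, 90, 42, 44)),
]
apples, obstacles = split_detections(frame)
```

In practice the labels and boxes would come from a trained network; the filtering step simply feeds the planner a clean list of targets and keep-out regions.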
While automated harvesting robots are a promising technology for the agricultural industry, they present real challenges for fruit and vegetable growers.
Robotic harvesting of fruit and vegetables requires the vision system to detect and localise the produce. To increase the success rate and reduce damage to produce during harvesting, information on the fruit's shape and on the location and orientation of the stem-branch joint is also required.
To address these challenges, the researchers created a state-of-the-art motion-planning algorithm featuring fast generation of collision-free trajectories to minimise processing and travel times between apples, reducing harvesting time and maximising the number of apples that can be harvested at a single location.
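Two ingredients of such a planner can be sketched simply: checking that a candidate trajectory stays clear of obstacles, and ordering targets to reduce travel between them. The straight-line check, spherical obstacles, clearance value, and greedy ordering below are simplifying assumptions for illustration, not the actual Monash algorithm.

```python
import math

def segment_clear(start, goal, obstacles, clearance=0.03, steps=50):
    """Return True if the straight line start -> goal keeps `clearance`
    metres away from every spherical obstacle, given as (centre, radius)
    pairs in metres. Samples the segment at `steps` + 1 points."""
    for i in range(steps + 1):
        t = i / steps
        p = tuple(s + t * (g - s) for s, g in zip(start, goal))
        for centre, radius in obstacles:
            if math.dist(p, centre) < radius + clearance:
                return False  # trajectory passes too close to an obstacle
    return True

def pick_order(gripper, apples):
    """Greedy nearest-neighbour ordering of apple positions, a simple
    way to cut travel time between successive picks."""
    order, pos, remaining = [], gripper, list(apples)
    while remaining:
        nxt = min(remaining, key=lambda a: math.dist(pos, a))
        remaining.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

# A branch modelled as a sphere blocks the direct path to one apple
# but not another.
branch = [((0.35, 0.0, 0.0), 0.05)]
print(segment_clear((0, 0, 0), (0.2, 0, 0), branch))  # clear path
print(segment_clear((0, 0, 0), (0.5, 0, 0), branch))  # blocked path
```

A real planner would search for a detour when the direct path is blocked and solve the ordering problem more carefully, but the same two questions, "is this trajectory collision-free?" and "which apple next?", sit at its core.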
The robot’s vision system can identify more than 90 per cent of the apples visible within the camera’s view from a distance of approximately 1.2m. The system works in all lighting and weather conditions, including intense sunlight and rain, and takes less than 200 milliseconds to process the image of an apple.