The centre is employing machine learning algorithms along with computer vision techniques, and integrating them with novel hardware, including NVIDIA GPUs, to create some of the world’s smartest pick-and-place robots that can see and act in the real world.
From fruit sorting to recycling to warehousing, many objects are still handled by people. Many of these tasks are repetitive and well suited to smart robots.
Australia has a vast expanse of farmland that is home to more than 85,000 farms employing more than 300,000 workers. Yet, there are simply not enough of them.
“In Australia, there is a labour shortage for farming. It is painful and manual labour-intensive. Robotic systems will not just benefit the farmers but the people working on the farms,” said Leitner.
Though robots have been deployed in industrial applications, these tend to be rigid and programmed to do the same tasks repeatedly, such as manufacturing a car or a smartphone.
However, that’s not the case in agriculture, because each piece of produce, such as an apple, banana or capsicum, is slightly different from the one that was picked up a second ago.
“Robotic systems need to become smarter and be able to deal with uncertainties. It’s more than just grasping but teaching the robot what to do after it picks up an object,” said Leitner.
“For instance, if you want it to pick up a knife and put it down, it does not matter if the robot picks the knife up at the handle. However, if the intention is to hand the knife to a human being, then it’s better to pick the knife up at the blade. This sort of reasoning about grasping is something that the centre is looking at,” he added.
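The knife example above can be sketched as a simple decision rule. This is purely illustrative (the function name and labels are invented, not the centre's actual code): the grasp point depends on what the robot intends to do after picking the object up.

```python
# Illustrative sketch: choosing where to grasp a tool based on task
# intent, as in the knife example. All names here are hypothetical.

def choose_grasp_point(tool: str, intent: str) -> str:
    """Return where to grasp a tool given the intended next step.

    For a knife: grasp the handle for ordinary pick-and-place, but
    grasp the blade for a handover, so the human receives the handle.
    """
    if tool == "knife" and intent == "handover":
        return "blade"
    return "handle"

print(choose_grasp_point("knife", "put_down"))  # handle
print(choose_grasp_point("knife", "handover"))  # blade
```

A real system would learn such task-conditioned grasp policies from data rather than hard-code them, but the reasoning step is the same: grasping is conditioned on what comes next.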
He admitted that there are hardware limitations as robots do not have very complex human-like hands and cannot see or perceive exactly what a human eye can.
As such, the centre is trying to combine visual feedback and understanding with robotic systems, a process that involves more than “just sticking a camera on a robotic system.”
It is aiming to build robotic vision systems that close the loop between perception and action. For example, if you take a picture of a cat, the computer or phone recognises it as a cat, but for robotic systems, that’s only the first step. The system must then figure out what to do next. For a self-driving car, after it detects the cat, it must determine its next course of action, such as braking or turning to avoid the cat.
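The closed-loop idea above can be sketched in a few lines. This is a hedged, minimal illustration (the `detect` and `decide` functions are stand-ins, not a real vision API): recognition produces a label, and a second stage maps that label to an action, which is the step that distinguishes a robotic system from a passive classifier.

```python
# Minimal sketch of closing the perception-action loop: detection is
# only the first step; the system must map what it sees to an action.
# Both functions below are illustrative stand-ins, not a real API.

def detect(image: str) -> str:
    # Stand-in for a vision model: returns a label for the scene.
    return "cat" if image == "cat_photo" else "clear_road"

def decide(label: str) -> str:
    # Map a perceived label to a next action for a self-driving car.
    actions = {"cat": "brake", "clear_road": "continue"}
    return actions.get(label, "slow_down")

print(decide(detect("cat_photo")))   # brake
print(decide(detect("road_photo")))  # continue
```

In a real robot this loop runs continuously: each new camera frame updates the perception, which updates the action, which changes what the camera sees next.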