Technology has a way of taking repetitive tasks and revolutionising them, like laundry operations that rely on people for sorting, folding, and handling materials.
Now in a first-of-its-kind breakthrough, an Australian innovation has shown huge promise in improving commercial laundry processes.
The AI-based robotic automation system picks up, flattens and feeds clean towels into an automated folding machine, a time-consuming, labour-intensive task that is set to grow with the increasing demands on the industry.
Queensland-based family-run laundry Consolidated Linen Service (CLS) worked with ARM Hub on the project, supported by the state’s Essential Goods and Supply Chain Program, to prove the viability of a computer-vision-driven, bi-manual towel-unfolding concept.
Declaring it a success, David Ledger, ARM Hub head of project services, said “there are clear pathways for increasing the throughput and accuracy of the system, and no hard barriers to scale were discovered”.
Huge potential with growing commercial demands
CLS hand-feeds some 700,000 individual items of clean linen into automated folding machines, in very hot, humid environments where workers need regular hydration and rest breaks.
“The item must be picked up individually from a hopper, held by the corners to flatten it, and then fed into the folding machine – which takes around six seconds per item,” said Tom Roberts, CLS operations manager.
With labour shortages driven by the pandemic, and with an eye on the 2032 Brisbane Olympics, the demands on commercial laundries are expected to skyrocket.
“To take it to the next level and try and cater for the growing market, there is only one way we can do it, and that is automation,” he said.
Promising application of robotic process automation
The first phase of the project aimed to demonstrate it was possible to automate flattening a towel.
Using advanced AI technologies such as computer vision, the team aimed to maximise the throughput potential of the design.
A prototype system with two robotic arms to pick up and unfold the towels was assembled around the observation that when two corners were grasped and stretched apart horizontally, the remaining two corners would usually hang at the new lowest point.
Either of these could then be grabbed, leaving the system holding two adjacent corners.
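The geometry behind this observation can be checked with a toy model. The corner numbering and helper functions below are purely illustrative and are not the project's actual code:

```python
# Toy model of the corner-grasping observation: label a towel's four
# corners 0-3 going around its edge, so corners i and (i + 1) % 4 are
# adjacent, while i and (i + 2) % 4 are diagonally opposite.

def hanging_corners(grasped):
    """Corners left free when two corners are grasped and stretched apart."""
    return [c for c in range(4) if c not in grasped]

def is_adjacent(a, b):
    """True if corners a and b share an edge of the towel."""
    return (a - b) % 4 in (1, 3)

# Grasp a diagonal pair, e.g. corners 0 and 2: the free corners 1 and 3
# hang at the new lowest point, and each of them is adjacent to both
# grasped corners, so regrasping either yields two adjacent corners.
free = hanging_corners((0, 2))
print(free)                                  # [1, 3]
print(all(is_adjacent(0, c) for c in free))  # True
```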
The arms were mounted on tables at a distance that enabled overlapping ranges of motion and operated at a limited speed to ensure safety.
A container (hopper) half-filled with white towels supplied by CLS was placed next to one of the robot arms.
“The vision system is capable of detecting corners of towels,” said David.
One camera is mounted on a table between the two arms facing the area that both arms can reach, where towels are unfolded.
The other camera is mounted on the corner of the hopper, looking down into it.
One arm reaches down to retrieve a towel and then presents it in such a position that it is visible to the camera and reachable by the other arm.
The system combines an RGB camera and a 3D LIDAR sensor, generating an image with both colour and depth information for each pixel to enable furthest-point detection and corner detection.
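Given such a fused image, finding the lowest hanging point of the towel reduces to a search over depth values within the towel region. The sketch below illustrates the idea under assumed conventions (a per-pixel height map and a colour-segmentation mask); it is not the project's actual pipeline:

```python
import numpy as np

def lowest_point(height, mask):
    """Return (row, col) of the lowest towel pixel.

    height : H x W array of heights (e.g. metres above the floor), one
             value per pixel, as from a fused RGB + lidar image.
    mask   : H x W boolean array marking towel pixels, e.g. from colour
             segmentation of the white towels (an assumed step).
    """
    candidates = np.where(mask, height, np.inf)  # ignore non-towel pixels
    return np.unravel_index(np.argmin(candidates), candidates.shape)

# Toy scene: the towel occupies the left column and hangs lowest at the
# bottom-left pixel.
height = np.array([[1.0, 2.0, 2.0],
                   [0.8, 2.0, 2.0],
                   [0.3, 2.0, 2.0]])
mask = np.array([[True, False, False],
                 [True, False, False],
                 [True, False, False]])
r, c = lowest_point(height, mask)
print(int(r), int(c))  # 2 0
```

Corner detection would work on the same fused image, with the RGB channels helping to distinguish towel edges from the background.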
The system is coordinated by the scheduler node, which runs through a sequence of actions including collecting and processing camera images, moving the robot arms to various waypoints, and actuating the grippers.
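A scheduler of this kind can be sketched as an ordered list of named actions. The step names and interfaces below are illustrative stand-ins, since the real system's waypoints and internals are not described in detail:

```python
from typing import Callable

def make_scheduler(steps: list[tuple[str, Callable[[], None]]]):
    """Return a function that runs each named action in order and
    reports the sequence executed, mimicking a scheduler node."""
    def run():
        executed = []
        for name, action in steps:
            action()          # e.g. capture an image, move an arm
            executed.append(name)
        return executed
    return run

# Stub actions standing in for camera capture, arm motion to waypoints,
# and gripper actuation. Real steps would call hardware drivers.
noop = lambda: None
cycle = make_scheduler([
    ("capture_hopper_image", noop),
    ("pick_towel_from_hopper", noop),
    ("capture_workspace_image", noop),
    ("grasp_second_corner", noop),
    ("stretch_and_flatten", noop),
    ("feed_to_folding_machine", noop),
])
print(cycle())
```

Keeping the sequence declarative like this makes it straightforward to reorder steps or insert extra vision checks as the prototype evolves.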
The next phase is integrating the system with folding machines, working towards commercial applications.
“With hardware specified for the task and extra computer vision engineering effort, we should be able to push the performance into the range where it is attractive to potential customers,” Ledger said.