A major obstacle on the road to full automation is that the robot cannot find the edge of the blade without human help. Blade positioning is adjusted manually before each repair session. Without being able to tell exactly where the edge affected by oil and sand exposure is, the robot cannot latch on and start working independently.
Once on the edge, the robot uses geopositioning to figure out possible trajectories for the repair arm. Positioning calibration requires a reference point, and the current technology still requires human input to find that point. A self-sufficient robot would have to handle this by itself, too.
Repairing a wind turbine costs €1,700 per hour. The current robotic solution still relies on human guidance.
A self-sufficient robot should learn how to latch onto the blade edge and avoid getting lost while repairing it.
Computer vision helps the robot spot damaged areas and get to them. A lightweight neural network pinpoints the reference point for geopositional calibration.
The arm camera is a fisheye camera. Such cameras provide a panoramic view (a great boost to the robot’s safety) at the expense of distorting images. As a result, simply teaching the robot to process them is not enough: it does not see the way regular cameras or humans do. Cropping the images with the help of machine learning mitigates this challenge.
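The cropping step can be pictured with a minimal sketch. The helper below is hypothetical (the article does not describe Rope Robotics’ actual pipeline); it assumes the ML model outputs a predicted center of interest, and extracts a square patch around it, since fisheye distortion is mildest away from the image periphery:

```python
import numpy as np

def crop_around_prediction(frame: np.ndarray, cx: int, cy: int, size: int) -> np.ndarray:
    """Crop a square patch around an ML-predicted center (cx, cy).

    Downstream processing then runs on the less-distorted patch
    instead of the full fisheye panorama. Coordinates are clamped
    so the patch always stays inside the frame.
    """
    half = size // 2
    h, w = frame.shape[:2]
    x0 = min(max(cx - half, 0), max(w - size, 0))
    y0 = min(max(cy - half, 0), max(h - size, 0))
    return frame[y0:y0 + size, x0:x0 + size]
```

The clamping matters in practice: a prediction near the frame border still yields a full-sized patch rather than a truncated one.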
To make the robot latch on without human input, we taught it to measure the distance between itself and the edge of the turbine blade. Classic computer vision methods helped achieve a near-perfect distance prediction; the small remaining deviation has no impact on BR-8’s work and poses no danger to the robot. CV also enables BR-8 to measure the angle between the robot and the blade edge, so manual repositioning of blades is not required for repairs.
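One classic-CV way to obtain both quantities at once, sketched here as an illustration rather than Rope Robotics’ actual method: fit a straight line to detected edge points (e.g. from an edge detector), then read off the perpendicular distance to the robot’s reference point and the line’s angle. The function names and coordinate conventions are assumptions.

```python
import numpy as np

def edge_distance_and_angle(edge_pts: np.ndarray, robot_xy: np.ndarray):
    """Fit a line to blade-edge points via total least squares.

    edge_pts: (N, 2) array of detected edge coordinates.
    robot_xy: (2,) reference point of the robot.
    Returns (perpendicular distance to the edge, edge angle in degrees).
    """
    centroid = edge_pts.mean(axis=0)
    # Principal direction of the point cloud = direction of the edge.
    _, _, vt = np.linalg.svd(edge_pts - centroid)
    direction = vt[0]
    normal = np.array([-direction[1], direction[0]])
    distance = abs((robot_xy - centroid) @ normal)
    angle = np.degrees(np.arctan2(direction[1], direction[0])) % 180.0
    return float(distance), float(angle)
```

Total least squares (via SVD) is preferred over ordinary regression here because the edge may be near-vertical in image coordinates, where regressing y on x breaks down.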
For geopositioning calibration purposes, we are working with an arm attachment that carries four dispensers. Our algorithm finds a particular dispenser and reports its position so that a proprietary Rope Robotics solution can learn the new geolocation. This task is impossible to solve with computer vision alone. We employ a lightweight neural network to segment the fisheye images and “show” the robot where to look for the center of the reference dispenser.
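Once the network has segmented the image, extracting the dispenser center is straightforward. A minimal sketch, assuming the network’s output has been thresholded into a binary mask (1 = dispenser pixel, 0 = background); the helper name is hypothetical:

```python
import numpy as np

def dispenser_center(mask: np.ndarray) -> tuple:
    """Return the (x, y) centroid of the reference dispenser from a
    binary segmentation mask, e.g. a thresholded PSPNet-style output.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        raise ValueError("no dispenser pixels found in mask")
    return float(xs.mean()), float(ys.mean())
```

The centroid of the mask is robust to ragged mask boundaries, which matters on distorted fisheye frames where the dispenser’s outline is rarely clean.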
The autonomous robot latching solution is complete, and it does the job both indoors, for testing, and outdoors, for real-life use. Geopositioning calibration is still a work in progress: the current version can be improved upon before moving into production. A later iteration would function with just one dispenser on the arm attachment instead of four.
Both solutions are written in Python and are light enough to run on a Raspberry Pi. A low-power implementation is vital for outdoor work, as time spent recharging the robot increases the downtime of wind turbines.
As turbine manufacturers lower their warranty expenses with the help of Rope Robotics, they can price their products more competitively. With alarming reports on climate change, easier access to alternative energy is more welcome than ever.
Technology: Computer Vision, Machine Learning, Neural Network
Tools and frameworks: Python, PSPNet