
US scientists may have developed the first robot syllabus that allows machines to transfer skills without human intervention

Robots struggle to learn from each other and rely on human instruction

New research from UC Berkeley shows that the process could be automated

This would eliminate the struggles of manually training robots

Despite robots being increasingly integrated into real-world environments, one of the major challenges in robotics research is ensuring the devices can adapt to new tasks and environments efficiently.

Traditionally, training to master specific skills requires large amounts of data and specialized training for each robot model – but to overcome these limitations, researchers are now focusing on creating computational frameworks that enable the transfer of skills across different robots.

A new development in robotics comes from researchers at UC Berkeley, who have introduced RoVi-Aug – a framework designed to augment robotic data and facilitate skill transfer.

The challenge of skill transfer between robots

To ease the training process in robotics, learned skills need to be transferable from one robot to another, even when the robots differ in hardware and design. This capability would make it easier to deploy robots across a wide range of applications without retraining each one from scratch.

However, in many current robotics datasets there is an uneven distribution of scenes and demonstrations. Some robots, such as the Franka and xArm manipulators, dominate these datasets, making it harder to generalize learned skills to other robots.

To address the limitations of existing datasets and models, the UC Berkeley team developed the RoVi-Aug framework which uses state-of-the-art diffusion models to augment robotic data. The framework works by producing synthetic visual demonstrations that vary in both robot type and camera angles. This allows researchers to train robots on a wider range of demonstrations, enabling more efficient skill transfer.

The framework consists of two key components: the robot augmentation (Ro-Aug) module and the viewpoint augmentation (Vi-Aug) module.


The Ro-Aug module generates demonstrations involving different robotic systems, while the Vi-Aug module creates demonstrations captured from various camera angles. Together, these modules provide a richer and more diverse dataset for training robots, helping to bridge the gap between different models and tasks.
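The pipeline described above can be sketched in a few lines. Everything here is a hypothetical illustration: the function names (`ro_aug`, `vi_aug`, `augment`) and the dict-based demonstration format are assumptions, not the framework's actual API, and the diffusion models that synthesize new images are stubbed out as simple metadata swaps.

```python
def ro_aug(demo, target_robots):
    """Robot augmentation: re-render the demonstration as if a different
    robot performed the same trajectory (stubbed as a metadata swap)."""
    return [{**demo, "robot": r} for r in target_robots]

def vi_aug(demo, camera_angles):
    """Viewpoint augmentation: re-render the demonstration from new
    camera angles (stubbed as a metadata swap)."""
    return [{**demo, "camera_deg": a} for a in camera_angles]

def augment(demos, target_robots, camera_angles):
    """Compose both modules: each source demonstration expands into
    len(target_robots) * len(camera_angles) synthetic variants."""
    out = []
    for demo in demos:
        for robot_demo in ro_aug(demo, target_robots):
            out.extend(vi_aug(robot_demo, camera_angles))
    return out

# One real demonstration becomes six synthetic ones: 2 robots x 3 viewpoints.
source = [{"robot": "franka", "camera_deg": 0, "task": "pick_cube"}]
augmented = augment(source, ["xarm", "ur5"], [0, 30, 60])
print(len(augmented))  # 6
```

The point of composing the two modules is multiplicative coverage: a dataset dominated by one robot and one camera setup is expanded along both axes at once, which is what lets a policy trained on the augmented data generalize to unseen robot/viewpoint combinations.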

“The success of modern machine learning systems, particularly generative models, demonstrates impressive generalizability and motivated robotics researchers to explore how to achieve similar generalizability in robotics,” Lawrence Chen (Ph.D. Candidate, AUTOLab, EECS & IEOR, BAIR, UC Berkeley) and Chenfeng Xu (Ph.D. Candidate, Pallas Lab & MSC Lab, EECS & ME, BAIR, UC Berkeley), told Tech Xplore.

Original Author: Efosa Udinmwen | Source: TechRadar
