Can robots learn from machine dreams?
MIT CSAIL researchers (left to right) Alan Yu, an undergraduate in electrical engineering and computer science (EECS); Phillip Isola, associate professor of EECS; and Ge Yang, a postdoctoral associate, developed an AI-powered simulator that generates unlimited, diverse, and realistic training data for robots. Robots trained in this virtual environment can seamlessly transfer their skills to the real world, performing at expert levels without additional fine-tuning. Credit: Photo by Michael Grimmett/MIT CSAIL
by Rachel Gordon | MIT CSAIL
Boston MA (SPX) Nov 20, 2024

For roboticists, one challenge towers above all others: generalization - the ability to create machines that can adapt to any environment or condition. Since the 1970s, the field has evolved from writing sophisticated programs to using deep learning, teaching robots to learn directly from human behavior. But a critical bottleneck remains: data quality. To improve, robots need to encounter scenarios that push the boundaries of their capabilities, operating at the edge of their mastery. This process traditionally requires human oversight, with operators carefully challenging robots to expand their abilities. As robots become more sophisticated, this hands-on approach hits a scaling problem: the demand for high-quality training data far outpaces humans' ability to provide it.

Now, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has developed a novel approach to robot training that could significantly accelerate the deployment of adaptable, intelligent machines in real-world environments. The new system, called "LucidSim," uses recent advances in generative AI and physics simulators to create diverse and realistic virtual training environments, helping robots achieve expert-level performance in difficult tasks without any real-world data.

LucidSim combines physics simulation with generative AI models, addressing one of the most persistent challenges in robotics: transferring skills learned in simulation to the real world. "A fundamental challenge in robot learning has long been the 'sim-to-real gap' - the disparity between simulated training environments and the complex, unpredictable real world," says MIT CSAIL postdoc Ge Yang, a lead researcher on LucidSim. "Previous approaches often relied on depth sensors, which simplified the problem but missed crucial real-world complexities."

The multipronged system blends several technologies. At its core, LucidSim uses large language models to generate varied, structured descriptions of environments. Generative image models then transform these descriptions into pictures. To ensure the images reflect real-world physics, an underlying physics simulator guides the generation process, as sketched below.
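The article doesn't include the authors' code, but the flow of the pipeline can be sketched in a few lines of Python. Everything in this sketch (describe_scenes, simulate_geometry, render_image) is a hypothetical stand-in for the three stages described above, with stubbed outputs in place of real models.

```python
# Hypothetical sketch of the three-stage pipeline described above.
# describe_scenes, simulate_geometry, and render_image are illustrative
# stand-ins, not the authors' actual API; their bodies are stubs.
from dataclasses import dataclass
import random

@dataclass
class SceneGeometry:
    depth_map: list       # per-pixel distance from the simulated camera
    semantic_mask: list   # per-pixel labels, e.g. "stairs", "gap", "floor"

def describe_scenes(task: str, n: int) -> list:
    """Stand-in for the LLM stage: return varied scene descriptions."""
    settings = ["mossy stone", "rusty metal", "painted concrete"]
    return [f"a {random.choice(settings)} staircase for {task}, outdoors"
            for _ in range(n)]

def simulate_geometry(description: str) -> SceneGeometry:
    """Stand-in for the physics simulator, which supplies the ground-truth
    geometry the robot will actually traverse."""
    return SceneGeometry(depth_map=[1.0] * 4, semantic_mask=["stairs"] * 4)

def render_image(description: str, geometry: SceneGeometry) -> dict:
    """Stand-in for a generative image model conditioned on the simulator's
    geometry, so the picture matches the physics under the robot's feet."""
    return {"prompt": description, "depth": geometry.depth_map}

# Generate a small batch of physically grounded training descriptions.
for prompt in describe_scenes("parkour training", n=3):
    image = render_image(prompt, simulate_geometry(prompt))
    print(image["prompt"])
```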

The birth of an idea: From burritos to breakthroughs
The inspiration for LucidSim came from an unexpected place: a conversation outside Beantown Taqueria in Cambridge, Massachusetts. "We wanted to teach vision-equipped robots how to improve using human feedback. But then, we realized we didn't have a pure vision-based policy to begin with," says Alan Yu, an undergraduate student in electrical engineering and computer science (EECS) at MIT and co-lead author on LucidSim. "We kept talking about it as we walked down the street, and then we stopped outside the taqueria for about half an hour. That's where we had our moment."

To cook up their data, the team generated realistic images by extracting depth maps, which provide geometric information, and semantic masks, which label different parts of an image, from the simulated scene. They quickly realized, however, that tightly controlling the composition of the image content caused the model to produce near-identical images from the same prompt. So they devised a way to source diverse text prompts from ChatGPT.
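The article doesn't name the exact generative stack, but as one plausible stand-in, the open-source Hugging Face diffusers library offers a depth-conditioned ControlNet pipeline that works the same way: the simulator's depth map pins down the geometry, while varied prompts (ChatGPT-sourced in LucidSim, hard-coded here) supply the visual diversity.

```python
# A plausible stand-in for the conditioning step, using the open-source
# Hugging Face diffusers library (not necessarily the authors' stack):
# a depth-conditioned ControlNet steers a text-to-image model so every
# variation keeps the same geometry the simulator produced.
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet, torch_dtype=torch.float16).to("cuda")

# Depth rendered by the physics simulator (a dummy gradient stands in here).
depth = np.tile(np.linspace(0, 255, 512, dtype=np.uint8), (512, 1))
depth_image = Image.fromarray(np.stack([depth] * 3, axis=-1))

# Varied prompts keep the same geometry but change the appearance.
prompts = [
    "weathered concrete stairs in an alley, dusk, photorealistic",
    "mossy stone steps in a forest, morning fog, photorealistic",
    "painted metal stairs in a warehouse, harsh fluorescent light",
]
images = [pipe(p, image=depth_image, num_inference_steps=20).images[0]
          for p in prompts]
```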

This approach, however, produced only a single image per prompt. To make short, coherent videos that serve as little "experiences" for the robot, the team devised a second technique, called "Dreams In Motion." The system computes the motion of each pixel between frames, warping a single generated image into a short, multi-frame video. Dreams In Motion does this by considering the 3D geometry of the scene and the relative changes in the robot's perspective.
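The authors' implementation isn't shown here, but the underlying geometry is standard view warping. The NumPy sketch below illustrates the idea under a small-motion assumption: each pixel is unprojected using its depth, moved by the relative camera transform, and reprojected, so one generated image yields the next frame. The function name and the reuse of the source depth map for the target view are simplifications of mine, not the paper's method.

```python
import numpy as np

def warp_to_new_view(image, depth, K, R, t):
    """Inverse-warp `image` (H,W,3) into a slightly moved camera view.

    Simplified sketch of the geometry behind Dreams In Motion (not the
    authors' code): each target pixel is unprojected with its depth,
    moved into the source camera frame by (R, t), reprojected, and the
    source image is sampled there. Assumes small camera motion, so the
    source depth map is reused as the target view's depth.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    rays = np.linalg.inv(K) @ np.stack(
        [u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x HW rays
    pts = rays * depth.reshape(1, -1)                        # 3D points
    pts_src = R @ pts + t.reshape(3, 1)                      # source frame
    proj = K @ pts_src
    uv = proj[:2] / proj[2:]                                 # pixel coords
    us = np.clip(np.round(uv[0]).astype(int), 0, W - 1).reshape(H, W)
    vs = np.clip(np.round(uv[1]).astype(int), 0, H - 1).reshape(H, W)
    return image[vs, us]                                     # gather pixels

# Example: nudge the camera 5 cm sideways and re-render a dummy frame.
img = np.random.rand(64, 64, 3)
dep = np.full((64, 64), 2.0)
K = np.array([[60.0, 0, 32], [0, 60.0, 32], [0, 0, 1]])
frame2 = warp_to_new_view(img, dep, K, np.eye(3), np.array([0.05, 0, 0]))
```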

"We outperform domain randomization, a method developed in 2017 that applies random colors and patterns to objects in the environment, which is still considered the go-to method these days," says Yu. "While this technique generates diverse data, it lacks realism. LucidSim addresses both diversity and realism problems. It's exciting that even without seeing the real world during training, the robot can recognize and navigate obstacles in real environments."

The team is particularly excited about the potential of applying LucidSim to domains outside quadruped locomotion and parkour, their main test bed. One example is mobile manipulation, where a mobile robot is tasked with handling objects in an open area and where color perception is critical. "Today, these robots still learn from real-world demonstrations," says Yang. "Although collecting demonstrations is easy, scaling a real-world robot teleoperation setup to thousands of skills is challenging because a human has to physically set up each scene. We hope to make this easier, thus qualitatively more scalable, by moving data collection into a virtual environment."

Who's the real expert?
The team put LucidSim to the test against an alternative, where an expert teacher demonstrates the skill for the robot to learn from. The results were surprising: Robots trained by the expert struggled, succeeding only 15 percent of the time - and even quadrupling the amount of expert training data barely moved the needle. But when robots collected their own training data through LucidSim, the story changed dramatically. Just doubling the dataset size catapulted success rates to 88 percent. "And giving our robot more data monotonically improves its performance - eventually, the student becomes the expert," says Yang.

"One of the main challenges in sim-to-real transfer for robotics is achieving visual realism in simulated environments," says Stanford University assistant professor of electrical engineering Shuran Song, who wasn't involved in the research. "The LucidSim framework provides an elegant solution by using generative models to create diverse, highly realistic visual data for any simulation. This work could significantly accelerate the deployment of robots trained in virtual environments to real-world tasks."

From the streets of Cambridge to the cutting edge of robotics research, LucidSim is paving the way toward a new generation of intelligent, adaptable machines - ones that learn to navigate our complex world without ever setting foot in it.

Research Report: Learning Visual Parkour from Generated Images

Related Links
Computer Science and Artificial Intelligence Laboratory (CSAIL)
