

VIRAL: Visual Sim-to-Real at Scale for
Humanoid Loco-Manipulation

Tairan He*     Zi Wang*     Haoru Xue*     Qingwei Ben*
Zhengyi Luo     Wenli Xiao     Ye Yuan     Xingye Da     Fernando Castañeda
Shankar Sastry     Changliu Liu     Guanya Shi     Linxi "Jim" Fan     Yuke Zhu
*Equal Contributions;   GEAR Team Leads

Autonomous Loco-Manipulation

Time Lapse

Consecutive Successes


Visual Randomization in Simulation

All Randomization

Dome Light Rand


Image Rand

Material Rand
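The randomization axes shown above (dome light, image, material) can be thought of as a per-episode parameter sampler. The sketch below is a minimal illustration; all parameter names and ranges are assumptions for exposition, not the values used in VIRAL.

```python
import random

def sample_visual_randomization():
    """Sample one set of visual randomization parameters per episode.

    Every name and range here is an illustrative placeholder.
    """
    return {
        # Dome-light randomization: intensity and color temperature.
        "light_intensity": random.uniform(200.0, 2000.0),
        "light_color_temp_K": random.uniform(3000.0, 8000.0),
        # Material randomization: per-object albedo and roughness.
        "albedo_rgb": [random.random() for _ in range(3)],
        "roughness": random.uniform(0.1, 0.9),
        # Image randomization: post-hoc camera/image effects.
        "brightness_jitter": random.uniform(-0.2, 0.2),
        "gaussian_noise_std": random.uniform(0.0, 0.02),
    }

params = sample_visual_randomization()
```

Resampling such a dictionary at every episode reset forces the student policy to become invariant to appearance, rather than overfitting to one rendered scene.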


Key Teacher Elements

Delta Action Space & Reference State Initialization (RSI)


Key Sim2Real Elements

Finger SysID
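Finger SysID can be illustrated as fitting a command-to-response model from logged hardware data, then inverting it so the simulated fingers track what the real ones actually do. The affine model and all numbers below are synthetic stand-ins, not the paper's actual identification procedure.

```python
import numpy as np

# Hypothetical logs: commanded vs. measured finger joint angles (radians).
commanded = np.linspace(0.0, 1.2, 20)
measured = 0.85 * commanded + 0.05  # stand-in for real sensor readings

# Fit an affine command->response model, q_meas ~ gain * q_cmd + offset,
# via least squares (np.polyfit with degree 1).
gain, offset = np.polyfit(commanded, measured, 1)

def real_to_sim_command(q_desired):
    """Invert the fitted model: the command that yields a desired angle."""
    return (q_desired - offset) / gain
```

Even a model this simple closes much of the gap when the dominant error is a consistent gain or bias in the finger actuators.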



FOV Alignment

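FOV alignment amounts to matching the simulated camera's field of view to the real camera's calibration. A minimal pinhole-model sketch follows; the fx = 460 px at 848 px width figures are illustrative, not the robot's actual calibration.

```python
import math

def horizontal_fov_deg(focal_length_px: float, image_width_px: int) -> float:
    """Horizontal FOV implied by a pinhole camera's focal length."""
    return math.degrees(2.0 * math.atan(image_width_px / (2.0 * focal_length_px)))

def focal_length_for_fov(fov_deg: float, image_width_px: int) -> float:
    """Inverse: focal length (in pixels) reproducing a target FOV."""
    return image_width_px / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Match a simulated camera to a real camera whose calibration reports
# fx = 460 px at 848 px width (illustrative numbers).
real_fov = horizontal_fov_deg(460.0, 848)
sim_fx = focal_length_for_fov(real_fov, 848)  # round-trips to 460 px
```

Getting this one number wrong shifts every pixel the student sees, which is why camera alignment matters as much as appearance randomization.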

Compute Scaling for Teacher-Student Training

Scaling Compute for Teacher

Teacher Scaling Law

Scaling Compute for Student

Student Scaling Law
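Scaling curves like the teacher and student laws above are typically summarized with a power-law fit in log-log space. The sketch below fits failure rate against compute; the (gpu_hours, success) pairs are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Hypothetical (compute, success-rate) pairs; the real scaling-law data
# comes from the teacher/student sweeps.
gpu_hours = np.array([8.0, 16.0, 32.0, 64.0, 128.0])
success = np.array([0.20, 0.35, 0.55, 0.70, 0.82])

# Fit failure rate ~ a * compute^b (b < 0): linear regression in log-log space.
log_c = np.log(gpu_hours)
log_f = np.log(1.0 - success)
b, log_a = np.polyfit(log_c, log_f, 1)

def predict_success(compute):
    """Extrapolate success rate to a new compute budget."""
    return 1.0 - np.exp(log_a) * compute ** b
```

A fit like this makes the qualitative claim precise: under this model, doubling compute shrinks the failure rate by a fixed multiplicative factor, which is why low-compute runs often fail outright while large-scale runs are reliable.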

Generalization #1: Tray Position - Y Axis

Generalization #2: Tray Position - X Axis

Generalization #3: Cylinder Position

Generalization #4: Robot Position - Y Axis

Generalization #5: Robot Position - X Axis

Generalization #6: Table Height

Generalization #7: Lighting Conditions

Generalization #8: Table Cloth Color

Generalization #9: Table Type

Generalization #10: Object

Our Visual Sim2Real Journey

The first RGB-based sim2real deployment for visual arm reaching
May 30, 2025
Visual IK Sim2Real: Sign of Life
June 11, 2025
The first RGB-based sim2real deployment of visual grasping
June 19, 2025
Open-loop relaying teacher action in real for sanity check
July 8, 2025
First grasping semi-works
July 13, 2025
Grasping still does not work
July 25, 2025
Finger Primitive SysID
July 28, 2025
From Grasping to PreGrasping
July 31, 2025
Grasping finally works
Aug 06, 2025
Grasping OOD Objects
Aug 07, 2025
Exploration but no improvement
Aug 15, 2025
Sim2Real works for walking to table and standing
Aug 23, 2025
Walk-Stand-Grasp: Sim2Real works
Oct 05, 2025
Walk-Stand-Drop-Grasp-Turn: First Sim2Real
Oct 20, 2025
Walk-Stand-Drop-Grasp-Turn: Tuning and Trying Again
Oct 23, 2025
Walk-Stand-Drop-Grasp-Turn: Sign of Life
Oct 31, 2025
Walk-Stand-Drop-Grasp-Turn: 54 Cycles of Loco-Manipulation
Nov 10, 2025

The First RGB-based Sim2Real for Reaching

May 30, 2025: The task is to reach a green or red box based on visual input: a red box cues the fingers to close, and a green box cues them to open.

Failure Cases

Abstract

A key barrier to the real-world deployment of humanoid robots is the lack of autonomous loco-manipulation skills. We introduce VIRAL, a visual sim-to-real framework that learns humanoid loco-manipulation entirely in simulation and deploys it zero-shot to real hardware. VIRAL follows a teacher-student design: a privileged RL teacher, operating on full state, learns long-horizon loco-manipulation using a delta action space and reference state initialization. A vision-based student policy is then distilled from the teacher via large-scale simulation with tiled rendering, trained with a mixture of online DAgger and behavior cloning. We find that compute scale is critical: scaling simulation to tens of GPUs (up to 64) makes both teacher and student training reliable, while low-compute regimes often fail. To bridge the sim-to-real gap, VIRAL combines large-scale visual domain randomization over lighting, materials, camera parameters, image quality, and sensor delays with real-to-sim alignment of the dexterous hands and cameras. Deployed on a Unitree G1 humanoid, the resulting RGB-based policy performs continuous loco-manipulation for up to 54 cycles, generalizing to diverse spatial and appearance variations without any real-world fine-tuning, and approaching expert-level teleoperation performance. Extensive ablations dissect the key design choices required to make RGB-based humanoid loco-manipulation work in practice.

Method

There are three steps in the VIRAL framework:

  1. Teacher Training with Privileged Information: A privileged RL teacher with full state access learns long-horizon loco-manipulation using delta action space and reference state initialization.
  2. Student Distillation at Scale: A vision-based student policy is distilled from the teacher via large-scale simulation with tiled rendering, trained using a mixture of online DAgger and behavior cloning across tens of GPUs.
  3. Sim-to-Real Transfer: Large-scale visual domain randomization combined with real-to-sim alignment of dexterous hand and camera parameters enables zero-shot deployment to real hardware.
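The two teacher-side ingredients in step 1 can be sketched as follows. The trajectory length, the 29-DoF assumption, the 0.3 delta limit, and all function names are illustrative placeholders, not the paper's actual implementation.

```python
import numpy as np

def reference_state_init(reference_traj, rng):
    """Reference State Initialization (RSI): start each episode at a random
    phase of the reference motion instead of always at t = 0, so the policy
    sees all phases of the long-horizon task early in training."""
    t0 = int(rng.integers(len(reference_traj)))
    return t0, reference_traj[t0]

def apply_delta_action(reference_pose, delta, delta_limit=0.3):
    """Delta action space: the policy outputs a bounded correction around
    the reference pose rather than absolute joint targets."""
    return reference_pose + np.clip(delta, -delta_limit, delta_limit)

# Minimal rollout sketch (policy and trajectory are stand-ins).
rng = np.random.default_rng(0)
reference_traj = np.zeros((500, 29))   # e.g. a 29-DoF humanoid reference
t, state = reference_state_init(reference_traj, rng)
delta = rng.normal(0.0, 0.1, size=29)  # placeholder for the policy output
target = apply_delta_action(reference_traj[t], delta)
```

Bounding the policy's output around a reference keeps early exploration near feasible motions, while RSI ensures that later task phases (e.g. grasping after walking) receive training signal from the very first iterations.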
VIRAL Framework Pipeline
Visual Randomization

BibTeX

@article{he2025viral,
  title={VIRAL: Visual Sim-to-Real at Scale for Humanoid Loco-Manipulation},
  author={He, Tairan and Wang, Zi and Xue, Haoru and Ben, Qingwei and Luo, Zhengyi and Xiao, Wenli and Yuan, Ye and Da, Xingye and Castañeda, Fernando and Sastry, Shankar and Liu, Changliu and Shi, Guanya and Fan, Linxi and Zhu, Yuke},
  journal={arXiv preprint arXiv:2511.15200},
  year={2025}
}