```bash
uv pip install rfx-sdk
```

Three commands. Any robot. Any policy.

```bash
rfx record --robot so101 --repo-id my-org/demos --episodes 10
rfx deploy runs/my-policy --robot so101
rfx deploy hf://user/my-policy --robot go2 --duration 60
```

ROS was built for message passing between components. We're in a different era -- the workflow is collect demos, train a policy, deploy, iterate. rfx is built from scratch for that loop.
- Three commands -- `rfx record`, `rfx deploy`, `rfx doctor` -- that's the whole CLI
- Three-method robot interface -- `observe()`, `act()`, `reset()` -- same API for sim and real
- Self-describing models -- save once, load anywhere, deploy with zero config
- HuggingFace Hub native -- push and pull policies like you push datasets
- Rust core for real-time control, Python SDK for fast research
- Zenoh transport underneath -- invisible plumbing, there when you need it
```bash
uv pip install rfx-sdk
```

Or with pip:

```bash
pip install rfx-sdk
```

From source:

```bash
git clone https://github.com/quantbagel/rfx.git && cd rfx
bash scripts/setup-from-source.sh
```

All CLI commands can also be run directly with uv:

```bash
uv run rfx deploy runs/my-policy --robot so101
uv run rfx record --robot so101 --repo-id demos --episodes 10
```

Collect teleoperation demonstrations into a LeRobot dataset:
```bash
rfx record --robot so101 --repo-id my-org/demos --episodes 10
```

```python
# Or from Python
rfx.collection.collect("so101", "my-org/demos", episodes=10)
```

Load a trained policy and run it on hardware. One command:
```bash
# From a saved checkpoint
rfx deploy runs/my-policy --robot so101

# From HuggingFace Hub
rfx deploy hf://rfx-community/go2-walk-v1 --robot go2

# From a Python file with @rfx.policy
rfx deploy my_policy.py --robot so101

# Test without hardware
rfx deploy runs/my-policy --robot so101 --mock
```

```python
# Or from Python
rfx.deploy("runs/my-policy", robot="so101")
rfx.deploy("hf://user/policy", robot="go2", duration=30)
```

Deploy handles everything: load weights, resolve robot config, connect hardware, run the control loop with rate control and jitter tracking, clean shutdown on Ctrl+C.
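The rate-control part of that loop can be pictured with a small pure-Python sketch. Everything here is illustrative, not the rfx implementation: `FakeRobot` and `control_loop` are hypothetical names, the real loop works with torch tensors, and actual deploys also handle Ctrl+C and hardware teardown.

```python
import time

class FakeRobot:
    """Stand-in with the three-method interface (not an rfx class)."""
    def __init__(self):
        self.steps = 0
    def observe(self):
        return {"state": [0.0] * 12}
    def act(self, action):
        self.steps += 1
    def reset(self):
        self.steps = 0

def control_loop(robot, policy, hz=50, steps=5):
    """Run the policy at a fixed rate; track how far we ever ran past a deadline."""
    period = 1.0 / hz
    max_overrun = 0.0
    robot.reset()
    start = time.monotonic()
    for i in range(steps):
        obs = robot.observe()
        robot.act(policy(obs))
        # Deadlines are computed from the start time, so one slow step
        # does not push every later deadline back (no drift).
        deadline = start + (i + 1) * period
        slack = deadline - time.monotonic()
        if slack > 0:
            time.sleep(slack)
        else:
            max_overrun = max(max_overrun, -slack)
    return max_overrun

robot = FakeRobot()
jitter = control_loop(robot, lambda obs: [0.0] * 6, hz=50, steps=5)
```

Computing each deadline from the start time (rather than the previous iteration) is what keeps the loop rate stable under occasional slow steps.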
Every robot -- simulated or real -- implements three methods:
```python
obs = robot.observe()   # {"state": Tensor(1, 64), "images": ...}
robot.act(action)       # Tensor(1, 64)
robot.reset()
```

A policy is any callable `Dict[str, Tensor] -> Tensor`. Use `@rfx.policy` to make it deployable from the CLI:
```python
# my_policy.py
import torch
import rfx

@rfx.policy
def hold_position(obs):
    return torch.zeros(1, 64)  # hold still
```

```bash
rfx deploy my_policy.py --robot so101
```

For named joint control instead of raw tensor indices:
```python
@rfx.policy
def grasp(obs):
    return rfx.MotorCommands(
        {"gripper": 0.8, "wrist_pitch": -0.2},
        config=rfx.SO101_CONFIG,
    ).to_tensor()
```

Every saved model is a self-describing directory:
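Conceptually, named commands have to be placed into the robot config's joint ordering. A minimal pure-Python sketch of that mapping -- the joint names in `SO101_JOINTS` and the helper `named_to_vector` are hypothetical, and the real `MotorCommands.to_tensor()` returns a torch tensor built from `rfx.SO101_CONFIG`:

```python
# Hypothetical joint ordering for illustration; the real order lives in the robot config.
SO101_JOINTS = ["shoulder_pan", "shoulder_lift", "elbow",
                "wrist_pitch", "wrist_roll", "gripper"]

def named_to_vector(commands, joint_order, default=0.0):
    """Place each named command at its joint's index; unnamed joints hold the default."""
    unknown = set(commands) - set(joint_order)
    if unknown:
        raise KeyError(f"unknown joints: {sorted(unknown)}")
    return [commands.get(name, default) for name in joint_order]

vec = named_to_vector({"gripper": 0.8, "wrist_pitch": -0.2}, SO101_JOINTS)
# vec -> [0.0, 0.0, 0.0, -0.2, 0.0, 0.8]
```

Rejecting unknown joint names up front is the useful part: a typo like `"wrist_ptch"` fails loudly instead of silently commanding zeros.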
```python
policy.save("runs/go2-walk-v1",
            robot_config=config,
            normalizer=normalizer,
            training_info={"total_steps": 50000})
```

```
runs/go2-walk-v1/
  rfx_config.json     # architecture + robot + training metadata
  model.safetensors   # weights
  normalizer.json     # observation normalizer state
```
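The point of the layout is that a loader needs nothing but the directory path. A plain-JSON sketch of that idea -- the field names here are illustrative, not the actual `rfx_config.json` schema:

```python
import json
import os
import tempfile

# Illustrative metadata; the real schema is defined by rfx.
config = {
    "architecture": "mlp",
    "robot": "go2",
    "training": {"total_steps": 50000},
}

run_dir = os.path.join(tempfile.mkdtemp(), "go2-walk-v1")
os.makedirs(run_dir)
with open(os.path.join(run_dir, "rfx_config.json"), "w") as f:
    json.dump(config, f)

# Loading needs only the path: robot, architecture, and training
# metadata all come from the directory itself.
with open(os.path.join(run_dir, "rfx_config.json")) as f:
    loaded = json.load(f)
```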
Push to Hub, load anywhere:
```python
rfx.push_policy("runs/go2-walk-v1", "rfx-community/go2-walk-v1")
loaded = rfx.load_policy("hf://rfx-community/go2-walk-v1")
```

| Robot | Type | Interface | Status |
|---|---|---|---|
| SO-101 | 6-DOF arm | USB serial (Rust driver) | Ready |
| Unitree Go2 | Quadruped | Ethernet (Zenoh transport) | Ready |
| Unitree G1 | Humanoid | Ethernet (Zenoh transport) | In progress |
Custom robots: implement `observe()` / `act()` / `reset()` or write a YAML config.
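A custom robot is just a class with those three methods. A minimal pure-Python sketch -- `ConveyorRobot` is a hypothetical example, and a real implementation would return torch tensors rather than lists:

```python
class ConveyorRobot:
    """Hypothetical custom robot implementing the three-method interface."""
    def __init__(self, state_dim=4, action_dim=2):
        self.state_dim = state_dim
        self.action_dim = action_dim
        self._state = [0.0] * state_dim

    def observe(self):
        # Real robots return {"state": Tensor, "images": ...}.
        return {"state": list(self._state)}

    def act(self, action):
        if len(action) != self.action_dim:
            raise ValueError(f"expected {self.action_dim} values")
        # Apply the command; here we just fold it into the state vector.
        self._state[: self.action_dim] = action

    def reset(self):
        self._state = [0.0] * self.state_dim

robot = ConveyorRobot()
robot.reset()
robot.act([0.5, -0.5])
obs = robot.observe()
```

Because the policy only ever sees `observe()`'s dict and produces `act()`'s vector, the same policy code runs against this class, a simulator, or hardware.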
```python
robot = rfx.SimRobot.from_config("so101.yaml", backend="genesis", viewer=True)
robot = rfx.SimRobot.from_config("go2.yaml", backend="mjx", num_envs=4096)
robot = rfx.MockRobot(state_dim=12, action_dim=6)  # zero deps, for testing
```

MIT