SO101-Nexus is a simulation library providing Gymnasium-compatible environments for the SO-101 robot arm. It supports multiple simulation backends (ManiSkill, Genesis, and MuJoCo) so that researchers can train and evaluate policies without being locked into a single physics engine. SO101-Nexus integrates seamlessly with the LeRobot ecosystem.
Robust robot policies should generalize across simulators before being deployed in the real world. Sim-to-sim transfer helps identify policies that rely on simulator-specific artifacts.
At the same time, few standardized simulation environments exist for the SO-100 and SO-101 robot arms. SO101-Nexus addresses this gap by providing Gymnasium-compatible environments across multiple physics backends, enabling consistent experimentation and benchmarking.
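The sim-to-sim idea can be sketched as a small evaluation harness that rolls out the same policy against several backends and compares returns. This is a stdlib-only sketch with stub environments standing in for the real backends; the `StubEnv` class and `evaluate` helper are illustrative assumptions, not part of the SO101-Nexus API (real code would build each env with `gym.make`).

```python
import random


class StubEnv:
    """Stand-in for a backend env; real code would use gym.make(...)."""

    def __init__(self, backend, seed=0):
        self.backend = backend
        self.rng = random.Random(seed)
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0], {}

    def step(self, action):
        self.t += 1
        reward = 1.0 if action > 0 else 0.0
        truncated = self.t >= 256  # matches the 256-step episode cap
        return [0.0], reward, False, truncated, {}


def evaluate(policy, env_factories, episodes=3):
    """Mean episode return per backend for one policy."""
    results = {}
    for name, make_env in env_factories.items():
        total = 0.0
        for ep in range(episodes):
            env = make_env(ep)
            obs, info = env.reset()
            done = False
            while not done:
                obs, reward, terminated, truncated, _ = env.step(policy(obs))
                total += reward
                done = terminated or truncated
        results[name] = total / episodes
    return results


def policy(obs):
    # A real policy maps observations to actions; here we always "act".
    return 1


scores = evaluate(policy, {
    "maniskill": lambda s: StubEnv("maniskill", s),
    "mujoco": lambda s: StubEnv("mujoco", s),
})
print(scores)
```

Similar returns across backends suggest a policy is not exploiting one engine's quirks; a large gap is a red flag before real-world deployment.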
In addition, SO101-Nexus provides a foundation for training text-conditioned embodied policies via curriculum learning, with environments that expose primitives such as object localization and grasping.
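One way to read the curriculum idea: start from primitive tasks (e.g. localization, then grasping) and promote the policy to the next stage once its recent success rate clears a threshold. The scheduler below is a minimal sketch of that scheme; the stage names, threshold, and window are illustrative assumptions, not an existing SO101-Nexus API.

```python
class Curriculum:
    """Advance through task stages when the recent success rate clears a threshold."""

    def __init__(self, stages, threshold=0.8, window=100):
        self.stages = stages
        self.threshold = threshold
        self.window = window
        self.idx = 0        # index of the current stage
        self.recent = []    # sliding window of episode outcomes

    @property
    def current_stage(self):
        return self.stages[self.idx]

    def record(self, success):
        """Log one episode outcome; promote if the window is full and good enough."""
        self.recent.append(bool(success))
        if len(self.recent) > self.window:
            self.recent.pop(0)
        full = len(self.recent) == self.window
        rate = sum(self.recent) / len(self.recent)
        if full and rate >= self.threshold and self.idx < len(self.stages) - 1:
            self.idx += 1
            self.recent.clear()  # restart the window on the harder task


# Illustrative stage names loosely echoing the environments in this README.
cur = Curriculum(["LocateCube", "PickCubeLift", "PickCubeGoal"],
                 threshold=0.8, window=10)
for _ in range(10):
    cur.record(True)  # ten successes in a row -> promote to the next stage
print(cur.current_stage)
```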
Install only the backend you need:

```shell
pip install so101-nexus-mujoco     # MuJoCo backend
pip install so101-nexus-maniskill  # ManiSkill backend
```

Alternatively, install from source:

```shell
git clone https://github.com/johnsutor/so101-nexus.git
cd so101-nexus
# Install a single backend for development (swap the package name to switch backends)
uv sync --package so101-nexus-mujoco
uv sync --package so101-nexus-maniskill --prerelease=allow
```

SO101-Nexus registers its environments with Gymnasium, so any registered environment can be instantiated with `gym.make`.
```python
import gymnasium as gym
import so101_nexus_maniskill  # noqa: F401

env = gym.make(
    "ManiSkillPickCubeGoalSO101-v1",
    obs_mode="state",
    control_mode="pd_joint_delta_pos",
    render_mode="rgb_array",
)
obs, info = env.reset()
for _ in range(256):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

The MuJoCo backend follows the same pattern:

```python
import gymnasium as gym
import so101_nexus_mujoco  # noqa: F401

env = gym.make("MuJoCoPickCubeGoal-v1", render_mode="rgb_array")
obs, info = env.reset()
for _ in range(256):
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
env.close()
```

The ManiSkill and Genesis backends support batched simulation for large-scale data collection and training.
For ManiSkill, use native `num_envs` batching and, when an algorithm expects a Gymnasium `VectorEnv`, the official `ManiSkillVectorEnv` wrapper:

```python
import gymnasium as gym
import so101_nexus_maniskill  # noqa: F401
from mani_skill.vector.wrappers.gymnasium import ManiSkillVectorEnv

base_env = gym.make(
    "ManiSkillPickCubeLiftSO101-v1",
    obs_mode="state",
    control_mode="pd_joint_delta_pos",
    num_envs=512,
)
env = ManiSkillVectorEnv(base_env, auto_reset=True, ignore_terminations=False)
obs, info = env.reset()
```

All environments have a maximum episode length of 256 steps.
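The 256-step cap follows the usual Gymnasium time-limit convention: `truncated` becomes `True` once the cap is reached, even if the task is unfinished, while `terminated` is reserved for task success or failure. A stdlib-only sketch of that truncation logic (the `TimeCapped` class is illustrative, not SO101-Nexus code):

```python
class TimeCapped:
    """Minimal sketch of Gymnasium-style time-limit truncation."""

    def __init__(self, max_steps=256):
        self.max_steps = max_steps
        self.t = 0

    def reset(self):
        self.t = 0
        return 0, {}

    def step(self, action):
        self.t += 1
        terminated = False                      # task success/failure would set this
        truncated = self.t >= self.max_steps    # episode cap reached
        return 0, 0.0, terminated, truncated, {}


env = TimeCapped()
env.reset()
flags = [env.step(0)[3] for _ in range(256)]
print(flags[254], flags[255])  # only the 256th step sets truncated
```

Training code should treat `truncated` differently from `terminated` (e.g. bootstrap the value estimate on truncation), which is why the two flags are kept separate.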
To list every registered environment ID, run:

```shell
uv run python examples/list_envs.py
```

For PPO baseline training scripts, see `examples/README.md`.

- ManiSkill environments for SO-100 and SO-101
- MuJoCo environments for SO-101
- Genesis environments for SO-100 and SO-101
- Provide code for training a basic PPO policy on every environment to verify it works well
- Add more randomization options to environments (robot color, additional objects, environment appearance, a wider variety of starting poses, etc.)
- Add tasks such as looking at objects and following general directional commands
- Add documentation, with demo videos of each environment
- Additional manipulation tasks beyond pick-and-place/lift
- Add environments to the LeRobot Hub
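The randomization roadmap item above could take the shape of a seeded sampler over scene parameters, so that randomized evaluations stay reproducible. The parameter names below (robot color, object count, starting pose) are illustrative assumptions, not an existing SO101-Nexus config:

```python
import random


def sample_scene(seed):
    """Draw one randomized scene configuration (illustrative parameters)."""
    rng = random.Random(seed)
    return {
        "robot_color": rng.choice(["white", "black", "orange"]),
        "num_objects": rng.randint(1, 4),
        # Small perturbations around the default pose, one per joint.
        "start_pose": [rng.uniform(-0.1, 0.1) for _ in range(6)],
    }


scene = sample_scene(0)
print(scene["robot_color"], scene["num_objects"])
# Same seed -> identical scene, so a randomized benchmark can be replayed exactly.
assert sample_scene(0) == sample_scene(0)
```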
To contribute to or customize SO101-Nexus, clone the repository and install development dependencies:

```shell
git clone https://github.com/johnsutor/so101-nexus.git
cd so101-nexus
uv sync --package so101-nexus-mujoco
```

Run the test suite:

```shell
make test-mujoco
make test-maniskill
make test
```

Format and lint:

```shell
make format
make lint
```

This repository's source code is available under the Apache-2.0 License.
