Dexterity doesn't transfer.
Robots learn one demonstration at a time. Nothing scales across tasks, embodiments, or operators.
CHIROS is a data-embodying instrument that captures human dexterity in physical-AI-ready language.
It captures how you drive a robot, not how you perform a task. The embodiment becomes the ceiling.
No pressure. No timing. No intent. Pixels describe a scene; they don't describe a grasp.
A hand, rendered as something technology can learn from.
Every joint angle is ground truth, not an estimate.
That's our vision. Not a slogan we reverse-engineered, but the thing we embarked on in 2019, long before the humanoid wave, long before Physical AI had a name.
We saw early that this transfer needed a shared instrument. It had to acknowledge the human dimension: grounded in biomechatronics, ergonomics, and the way a hand actually moves under load. And it had to be native to the technology dimension: deterministic, absolute, reproducible; legible to robots, to AI, to pattern-finding systems that need to locate themselves in data.
CHIROS is that instrument. We designed it for our own work in dexterity research, and we're releasing it because the transfer is too important, and too hard, for any single team to do alone.
Imagery from real capture sessions coming in May 2026.
Built in the EU. One-time purchase. Owned forever.