COMFI: A Multimodal Industrial Human Motion Dataset for Markerless Motion Capture and Collaborative Robotics

1 LAAS-CNRS, Toulouse, France
2 NaturalPad, Montpellier, France
3 Image and Pervasive Access Laboratory (IPAL), Singapore
International Journal of Robotics Research (IJRR) 2025

COMFI fuses multi-view RGB, marker-based MoCap, forces, and robot data to benchmark markerless pose estimation and ergonomics for enhanced human–robot collaboration.

Abstract

COMFI (human-robot Collaboration Oriented Markerless For Industry) is a multimodal dataset designed to advance markerless motion capture, ergonomics, and Human–Robot Collaboration (HRC) in factory settings. COMFI contains 4.5 hours of synchronized and spatially co-registered streams acquired from 18 participants performing 24 tasks that span everyday movements (e.g., walking, sit-to-stand) and ergonomically demanding industrial actions (lifting, overhead work, bolting, sanding, welding), plus two HRC scenarios in which a Franka Emika Panda is guided by the human while holding a tool. Totaling 86.5 GB, the dataset includes calibrated multi-view RGB videos (40 Hz), optical motion-capture marker and joint-center positions as well as joint angles (100 and 40 Hz), 6D ground-reaction forces (1000 and 40 Hz), and robot telemetry (200 and 40 Hz). Camera intrinsics/extrinsics, global triggers, and software-barrier synchronization for the webcams are distributed, along with participant-scaled human Unified Robot Description Format (URDF) files that adhere to International Society of Biomechanics conventions, enabling kinematics, dynamics, and torque estimation. Videos are anonymized while preserving the facial cues useful to markerless pipelines. Accompanying code supports loading, calibration, and visualization. COMFI enables rigorous benchmarking of markerless pose estimation under occlusion and clutter against reference systems, allowing current state-of-the-art algorithms to be extended to complex industrial scenarios. COMFI is expected to catalyze reproducible, cross-disciplinary research toward safer, more ergonomic HRC.
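Because the streams are recorded at different native rates (MoCap at 100 Hz, forces at 1000 Hz, robot telemetry at 200 Hz, RGB at 40 Hz), a typical first step is to interpolate everything onto the common 40 Hz camera timeline. The sketch below illustrates this with synthetic signals; the function and variable names are hypothetical and do not reflect the actual COMFI loading code.

```python
import numpy as np

# Hypothetical sketch: align multi-rate COMFI-style streams on the
# 40 Hz camera timeline. Synthetic signals stand in for real files.

def resample_to(t_src, values_src, t_dst):
    """Linearly interpolate a 1-D signal onto a target timeline."""
    return np.interp(t_dst, t_src, values_src)

duration = 2.0  # seconds of recording
t_cam = np.arange(0.0, duration, 1 / 40)      # 40 Hz RGB timeline
t_mocap = np.arange(0.0, duration, 1 / 100)   # 100 Hz MoCap timeline
t_force = np.arange(0.0, duration, 1 / 1000)  # 1000 Hz force plates

elbow_angle = np.sin(2 * np.pi * 0.5 * t_mocap)       # fake joint angle (rad)
grf_z = 700 + 50 * np.sin(2 * np.pi * 1.0 * t_force)  # fake vertical GRF (N)

elbow_40 = resample_to(t_mocap, elbow_angle, t_cam)
grf_40 = resample_to(t_force, grf_z, t_cam)

print(elbow_40.shape, grf_40.shape)  # both now match the 40 Hz timeline
```

Linear interpolation is only one reasonable choice; for the 1000 Hz force signal, low-pass filtering before downsampling would avoid aliasing.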

Dataset organization

Pick a participant ID and a task type, then click a box to see the data it comprises and the corresponding file paths. Note that robot and force data are not available for all tasks.
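Since robot and force data are optional per task, loading code should probe for each modality rather than assume it exists. The sketch below assumes a hypothetical participant/task/modality folder hierarchy (the real COMFI layout may differ) and builds a throwaway tree to demonstrate.

```python
import tempfile
from pathlib import Path

# Hypothetical sketch: the exact COMFI folder layout is not specified
# here, so we assume a participant/task/modality hierarchy and show how
# to list which modalities actually exist for each task.

MODALITIES = ["rgb", "mocap", "forces", "robot"]

def available_modalities(task_dir: Path) -> list:
    """Return the modality subfolders present for one task."""
    return [m for m in MODALITIES if (task_dir / m).is_dir()]

# A throwaway tree stands in for the real dataset root.
root = Path(tempfile.mkdtemp())
(root / "P01" / "lifting" / "rgb").mkdir(parents=True)
(root / "P01" / "lifting" / "mocap").mkdir()
(root / "P01" / "lifting" / "forces").mkdir()
(root / "P01" / "walking" / "rgb").mkdir(parents=True)
(root / "P01" / "walking" / "mocap").mkdir()

for task_dir in sorted((root / "P01").iterdir()):
    print(task_dir.name, available_modalities(task_dir))
```

Checking for presence this way lets downstream benchmarks skip force- or robot-dependent metrics gracefully on tasks where those streams were not recorded.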

Dataset code examples

Each example lists the command to run it, alongside a brief video of the resulting output.

BibTeX

@article{chalabi2025comfi,
  title={COMFI: A Multimodal Industrial Human Motion Dataset for Markerless Motion Capture and Collaborative Robotics},
  author={Chalabi, Kahina and Sabbah, Maxime and Gouget, Nicolas and Adjel, Mohamed and Saurel, Guilhem and Wojciechowski, Krzysztof and Watier, Bruno and Bonnet, Vincent},
  journal={International Journal of Robotics Research},
  year={2025},
  url={https://comfi-gepetto.github.io/}
}