HOH: Markerless Multimodal Human-Object-Human Handover Dataset with Large Object Count

NeurIPS 2023 Datasets and Benchmarks
Noah Wiederhold, Ava Megyeri, DiMaggio Paris, Sean Banerjee, Natasha Kholgade Banerjee
Clarkson University, Potsdam, NY

Link to Dataset*
*Password access; email authors for password.


2,720 Handovers    136 Objects    40 People    20 Pairs
Fully Markerless 360° Capture

Largest Multi-Object Handover Dataset To Date

Data Contained in HOH

4-Viewpoint 30 FPS Kinect RGB-D Videos [3M Images]
4-Viewpoint 60 FPS FLIR RGB Videos [2.8M Images]
OpenPose Skeletons in Kinect Videos [1.6M Skeletons]
Fused Spatiotemporal 3D Point Clouds [250K Point Clouds]

Giver and Receiver Comfort Ratings
Ground Truth Annotations of Grasp, Transfer, and Release Time Events
Ground Truth on Multi-Person Handedness and 28-Class Grasp Type
Ground Truth Giver Hand, Object, and Receiver Hand Segments

Tracked Giver Hand, Object, and Receiver Hand in Kinect RGB [2.5M Segments]
Giver Hand, Object, and Receiver Hand 3D Point Clouds [790K Segments]

Object 3D Models and Metadata [For 136 Objects]
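The fused 3D point clouds above are derived from the multi-view Kinect RGB-D streams. As a rough illustration of the underlying operation (not the HOH processing pipeline itself), the sketch below back-projects a single depth frame into a 3D point cloud with the standard pinhole camera model; the depth values and intrinsics here are toy placeholders, not the HOH calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into an Nx3 point cloud
    using the pinhole model. Pixels with zero depth are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

# Toy 2x2 depth frame; intrinsics are placeholders, not Kinect calibration.
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
points = depth_to_point_cloud(depth, fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(points.shape)  # one 3D point per valid (nonzero) depth pixel
```

In the full multi-view setting, each camera's back-projected cloud would additionally be transformed by its extrinsic pose into a common world frame before fusion.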

Example Video Sequences

Example 3D Model Alignments

3D Models of Objects in HOH