Context

Itekube-7 is a dataset of touch interaction gestures defined for a sequential classification task. Unlike symbolic gestures, whose features are purely spatial, interaction gestures require both spatial and temporal analysis to be characterized properly. The seven classes of the dataset were chosen for their versatility and, in specific cases, for their strong correlations.

This dataset was created with variability in mind at every step:

  • The protocol given to participants was minimal, in order to capture every possible way of performing each gesture and to obtain multiple borderline cases. We observed variability in speed, positioning, fingers used, timing and trajectory amplitude.
  • Scale and orientation were free.
  • Participants came from different fields of work, with ages ranging from 12 to 62.

Details

  • 7 classes
  • 27 users
  • 6591 gestures

[Figure: Itekube-7 classes]

Note: In the original paper, the gestures from users 2, 3, 9, 12, 17, 23 and 27 were used to produce the test set.

Downloads

Each gesture was saved as an individual XML file, containing the gesture class and the participant id as metadata, along with a series of (x, y, t) contact points.
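As a starting point, here is a minimal sketch of how such a file could be read in Python with the standard library. The tag and attribute names used below ("class", "user", "point", "x", "y", "t") are assumptions made for illustration only; open one file from the archive and adjust them to the actual schema.

    # Hedged sketch: parse one gesture XML file into (label, user, points).
    # Tag and attribute names are assumed, not taken from the real schema.
    import xml.etree.ElementTree as ET

    def read_gesture(path):
        root = ET.parse(path).getroot()
        label = root.get("class")   # gesture class metadata (assumed attribute name)
        user = root.get("user")     # participant id metadata (assumed attribute name)
        points = [(float(p.get("x")), float(p.get("y")), float(p.get("t")))
                  for p in root.iter("point")]   # series of (x, y, t) contacts
        return label, user, points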

You can download the dataset in two formats. The raw version is an archive containing all the XML files:

-> Archive version <-

or the ready-to-use TFRecords, in which each gesture was sampled to 10 points using our dynamic sampling. A TFRecord example possesses two int64 features, "framecount" and "label", and a feature list "positions" to be reshaped into a [3, 10, 2] tensor.

-> TFRecords version <-
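The sketch below shows how such records could be decoded with TensorFlow 2.x. It assumes the records are tf.train.SequenceExample protos, with "framecount" and "label" as int64 context features and "positions" as a float feature list, matching the description above; the dtypes and exact proto layout are assumptions, so adapt the feature specs (or refer to the parser script below) if parsing fails.

    # Hedged sketch: decode one serialized record and reshape the positions
    # into the [3, 10, 2] tensor described above.
    import tensorflow as tf

    def parse_gesture(serialized):
        context_spec = {
            "framecount": tf.io.FixedLenFeature([], tf.int64),
            "label": tf.io.FixedLenFeature([], tf.int64),
        }
        sequence_spec = {
            "positions": tf.io.FixedLenSequenceFeature([], tf.float32),
        }
        context, sequence = tf.io.parse_single_sequence_example(
            serialized,
            context_features=context_spec,
            sequence_features=sequence_spec)
        positions = tf.reshape(sequence["positions"], [3, 10, 2])
        return positions, context["label"]

    # "train.tfrecords" is a placeholder file name.
    dataset = (tf.data.TFRecordDataset("train.tfrecords")
               .map(parse_gesture)
               .batch(32))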

We also provide an example Python script that builds training and testing TFRecords from the raw dataset. Even if you are not working with TensorFlow, the methods in this script should help you write your own parser.

-> Itekube-7 parser <-
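If you build your own TFRecords, the train/test split mentioned in the note above can be reproduced by filtering the raw files on the participant id. The sketch below reuses the hypothetical read_gesture() helper shown earlier and assumes the archive unpacks to a directory tree of XML files; the provided parser script remains the authoritative reference.

    # Hedged sketch: split raw gestures into train/test sets by user id,
    # following the split used in the original paper.
    from pathlib import Path

    # User ids from the note above; adjust the format to match the metadata.
    TEST_USERS = {"2", "3", "9", "12", "17", "23", "27"}

    def split_dataset(root_dir):
        train, test = [], []
        for path in Path(root_dir).glob("**/*.xml"):
            label, user, points = read_gesture(str(path))
            (test if user in TEST_USERS else train).append((label, points))
        return train, test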

Acknowledgements

If you use this dataset, please cite the following paper:

Quentin Debard, Christian Wolf, Stéphane Canu and Julien Arné.
Learning to recognize touch gestures: recurrent vs. convolutional features and dynamic sampling.
In the 13th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2018).