I’m happy to announce a new release of Kinetic Space.

Kinetic Space is a tool that allows anyone to record and recognize customized gestures using depth images as provided by the PrimeSense PS1080, the Kinect, or the Xtion sensors. The software observes and interprets the user's movements by processing the skeleton data of the user. The unique analysis routines allow not only the detection of simple gestures such as pushing, clicking, forming a circle, or waving, but also the recognition of more complex gestures such as those used in dance performances or sign language.
Five highlights of the software are that the gestures:
• can be easily trained: the user can simply train the system by recording the movement/gesture to be detected without having to write a single line of code
• are person independent: the system can be trained by one person and used by others
• are orientation independent: the system can recognize a gesture even if it is not performed with the same orientation as the trained one
• are speed independent: the system recognizes a gesture even if it is performed faster or slower than during training, and it reports the relative speed
• can be adjusted: the system and gesture configuration can be set up via an XML file (see the sketch after this list)
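To give a concrete picture of that last point, here is a minimal sketch of what such a configuration file might look like. The element and attribute names (kineticSpace, gesture, file, tolerance, oscPort) are purely illustrative assumptions and not the actual Kinetic Space schema; consult the files shipped with the release for the real format.

<!-- Hypothetical configuration sketch; names are illustrative only. -->
<kineticSpace oscHost="127.0.0.1" oscPort="7000">
  <!-- Each gesture entry points to a recorded movement and a matching tolerance. -->
  <gesture name="wave"   file="gestures/wave.rec"   tolerance="0.8"/>
  <gesture name="circle" file="gestures/circle.rec" tolerance="0.7"/>
</kineticSpace>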

The software has already been used by media artists, dancers, and the like to connect to and control a wide range of third-party applications such as Max/MSP, Pure Data, VVVV, Resolume, etc. via the OSC protocol. The software is written in Processing and based on SimpleOpenNI, OpenNI and NITE.
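For readers who want to consume these OSC messages in their own Processing sketches, here is a minimal receiving sketch using the oscP5 library. The port number (7000) and the address pattern "/gesture" are assumptions for illustration; adapt them to match your actual Kinetic Space configuration and message format.

import oscP5.*;

OscP5 oscP5;

void setup() {
  // Listen for incoming OSC messages; the port is an assumption,
  // adjust it to whatever Kinetic Space is configured to send to.
  oscP5 = new OscP5(this, 7000);
}

void oscEvent(OscMessage msg) {
  // The address pattern "/gesture" and the string payload are
  // illustrative assumptions, not the documented message format.
  if (msg.checkAddrPattern("/gesture")) {
    println("Recognized gesture: " + msg.get(0).stringValue());
  }
}

void draw() {
  // Nothing to draw; the sketch only logs recognized gestures.
}

The same incoming messages can of course be routed into Max/MSP, Pure Data, VVVV, or Resolume using their respective OSC receivers instead.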
