For those who have not seen the first video about this, okreylos uses the Kinect to do real-time 3D reconstruction. He uses the RGB/video feed for the main image, but uses the depth feed to warp that video into a reconstruction of the real 3D scene. Now he has released the software for download, with one caveat: you cannot build it yet. From his announcement:
I decided to release my 3D reconstruction software, even though nobody will be able to compile it yet. The problem is that it’s built on top of the Vrui VR toolkit, version 2.0, which is not released yet. But hopefully in a few days. At that point, it will definitely build on Linux, and probably on Mac OS X if you find a Mac version of the libusb-1.0 library (which I think exists).
Until then, you can at least look at some of the algorithms — which are not actually all that complicated.
The software is licensed under the GNU General Public License version 2, and can be downloaded from my R&D page: http://idav.ucdavis.edu/~okreylos/ResDev/Kinect/
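The algorithms he mentions largely boil down to back-projecting each depth pixel into a 3D point and texturing the result with the RGB frame. Here is a minimal sketch of that back-projection step, assuming a simple pinhole camera model with placeholder intrinsics (the real software uses properly calibrated parameters; the values and function below are illustrative, not from his code):

```python
import numpy as np

def depth_to_points(depth_m, fx=594.2, fy=591.0, cx=339.3, cy=242.7):
    """Back-project a depth image (in meters) into camera-space 3D points
    using the pinhole model. The intrinsics defaults are rough placeholder
    values for a Kinect, not calibrated ones."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)  # shape (h, w, 3)

# Toy example: a flat wall 2 m in front of the camera
depth = np.full((480, 640), 2.0)
points = depth_to_points(depth)
print(points.shape)  # (480, 640, 3)
```

Each resulting point can then be colored by looking up the corresponding RGB pixel (after aligning the two cameras), which is what produces the textured 3D view in the video.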
Virtual reality, with no more green screen or light-up motion-capture suit. Right now, any model angle rendered from a viewpoint other than the camera's POV will cut you in half, but add-ons could store some static images so a game renders you properly from any angle. Full-body rendering is important because, outside of a side-scroller, you would most likely see your own back in third-person view. The same applies to any in-game movie mode where the camera flies around you; since you can't give the Kinect any Red Bull, you may want some static mappings. Even though that may seem like the next logical step, it has complications of its own. Luckily, the video's author is looking into static mapping of stationary objects like an entire room. It's still a work in progress, but it has potential, especially once others can build and work off the code…