Right now the documentation is only available in a lovely Word format. I wanted to read it on my iPad, so I went ahead and converted the Kinect SDK documentation to ePub format. It should work on a Nook too.
I have improved the framerate quite a lot since last time. It is almost ready to be used in production. At the moment I am grabbing the positions of the joints; the next step is to grab the rotations and apply the tracking data to a skinned character. Stay tuned!
NAVI (Navigational Aids for the Visually Impaired) is a student project aiming to improve indoor navigation for the visually impaired by combining the Microsoft Kinect camera, a vibrotactile waist belt and markers from the AR-Toolkit.
While the “white cane” is a good tool for improving navigation for the visually impaired, it has certain drawbacks, such as a small detection radius and the fact that, in typical use, it only detects objects on the ground.
We wanted to augment the visually impaired person’s impression of a room or building by providing vibrotactile feedback that reproduces the room’s layout.
For this, our software maps depth information from the Kinect onto three pairs of Arduino LilyPad vibration motors located at the left, center and right of the waist. These motor pairs are hot-glued into a fabric waist belt and connected to an Arduino 2009 board. To increase the impact of each vibration motor, it was placed inside the cap of a plastic bottle. The Arduino in the waist belt is connected via USB to a laptop mounted on a special backpack construction, which has holes for the cables and the fan.
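The core of this mapping can be sketched as follows. This is an illustrative Python sketch, not the project's actual C# code: it splits a depth frame into left/center/right thirds and turns the nearest obstacle in each third into a vibration intensity (the linear ramp and the 4 m range are assumptions for the example).

```python
# Illustrative sketch (not the project's actual C# implementation):
# map a depth frame to three vibration intensities, one per motor pair.

def depth_to_vibration(depth_frame, max_range_mm=4000):
    """depth_frame: list of rows, each a list of depth readings in
    millimetres (0 = no reading). Returns [left, center, right]
    intensities in the range 0..255."""
    width = len(depth_frame[0])
    thirds = [(0, width // 3),
              (width // 3, 2 * width // 3),
              (2 * width // 3, width)]
    intensities = []
    for start, end in thirds:
        # Nearest valid reading in this vertical slice of the image.
        readings = [row[x] for row in depth_frame
                    for x in range(start, end) if row[x] > 0]
        nearest = min(readings) if readings else max_range_mm
        # Closer obstacle -> stronger vibration (linear ramp, clamped).
        level = max(0.0, 1.0 - nearest / max_range_mm)
        intensities.append(int(level * 255))
    return intensities
```

The three resulting values would then be sent over the serial/USB link to the Arduino, which drives the corresponding motor pairs.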
To support point-to-point navigation, a seeing-eye dog is usually used. Such a dog, however, must be trained for specific routes, costs a lot of money and tires quickly. Some research projects use GPS to provide point-to-point navigation, but GPS is not applicable in indoor scenarios.
We wanted to utilize the RGB camera of the Kinect, so we put several AR-Toolkit markers on the walls and doors of our building, thereby modeling a route from one room to another. The markers are tracked continuously along the way, and our software provides synthesized auditory navigation instructions for the person. These instructions vary based on the person’s distance to the marker (which we get from the Kinect’s depth camera). For example, if you walk towards a door, the output will be “Door ahead in 3”, “2”, “1”, “pull the door”, where each part of the information depends on the distance to the marker on the door.
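The distance-dependent announcements can be sketched like this. Again an illustrative Python sketch rather than the actual C# code, and the specific distance thresholds are hypothetical, chosen only to reproduce the door countdown described above.

```python
# Illustrative sketch with hypothetical thresholds: pick the spoken
# instruction for a door marker based on its measured distance.

def door_instruction(distance_m):
    """Return the announcement for a door marker at distance_m metres,
    or None if the door is still too far away to announce."""
    if distance_m > 3.5:
        return None                 # too far, say nothing yet
    if distance_m > 3.0:
        return "Door ahead in 3"
    if distance_m > 2.0:
        return "2"
    if distance_m > 1.0:
        return "1"
    return "pull the door"          # within arm's reach
```

In the real system the selected string would be handed to the speech synthesizer, and repeated announcements for the same threshold would be suppressed as the person keeps walking.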
The software is written in C#/.NET. We used the ManagedOpenNI wrapper (https://github.com/kobush/ManagedOpenNI) for the Kinect and the managed wrapper of ARToolKitPlus (http://code.google.com/p/comp134artd) for marker tracking. Voice synthesis is done using Microsoft’s Speech API (http://msdn.microsoft.com/en-us/speech/default). All input streams are glued together using Reactive Extensions for .NET (http://msdn.microsoft.com/en-us/devlabs/ee794896).
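The “glue the input streams together” idea can be illustrated in miniature. This Python sketch uses a plain thread-safe queue instead of Reactive Extensions, but shows the same shape: depth frames and marker detections arrive as independent event sources and are merged into one ordered stream for a single consumer.

```python
# Illustrative sketch of merging independent input streams (here with a
# plain queue, standing in for Reactive Extensions in the real system).
import queue

events = queue.Queue()

def on_depth_frame(frame):
    # Called whenever the Kinect delivers a depth frame.
    events.put(("depth", frame))

def on_marker(marker_id, distance_m):
    # Called whenever the marker tracker detects an AR marker.
    events.put(("marker", (marker_id, distance_m)))

def drain(handle):
    # Deliver all pending events, in arrival order, to one handler.
    while True:
        try:
            kind, payload = events.get_nowait()
        except queue.Empty:
            return
        handle(kind, payload)
```

Rx gives you this merging (plus filtering and throttling of the event streams) declaratively, which is why it fits a sensor-fusion setup like this well.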
I recently updated the Windows Phone 7 application OM Meditation Timer to fix a small bug around the midpoint notification sound. If you’re still having issues getting this notification to play, you may need to uninstall the application and then re-install it. Don’t worry: if you’ve paid for the app, you can re-install it from the Microsoft Marketplace without being charged.
Don’t forget that if you’re using the trial version, you only get a small subset of the app’s functionality.