Kinect understands the 3D space using an infrared sensor and a depth processor. The depth processor measures the distance between physical points and the device. The Kinect SDK uses a structure named CameraSpacePoint to represent a point in the physical 3D space. A CameraSpacePoint is a set of three properties [X, Y, Z]: X is the distance along the horizontal axis, Y is the distance along the vertical axis, and Z is the distance (depth) between the point and the plane of the sensor.
The values are measured in meters. So, the CameraSpacePoint [1.5, 2.0, 4.7] is located 1.5 meters from the left, 2.0 meters from the top, and 4.7 meters from the sensor.
1 meter = 3.28 feet
1 meter = 39.37 inches
This concept is illustrated in the figure below.
[Figure: the X, Y, and Z axes of a CameraSpacePoint relative to the sensor]
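In code, that example point looks like this (a minimal sketch; CameraSpacePoint lives in the Microsoft.Kinect namespace):
using Microsoft.Kinect;

// The example point: 1.5 meters along X, 2.0 meters along Y,
// and 4.7 meters away from the sensor plane.
var point = new CameraSpacePoint
{
    X = 1.5f,
    Y = 2.0f,
    Z = 4.7f
};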
However, when we develop a Kinect app, we display it on a 2D computer monitor. Somehow, we have to project the 3D points onto the 2D screen space. There are two screen spaces:
- Color Space: 1920×1080 pixels
- Depth/Infrared Space: 512×424 pixels
Obviously, points in the 2D space only have X and Y values, measured in pixels.
So, we have to convert meters to pixels! How’s that possible? I have thoroughly explained this process in my blog post Understanding Kinect Coordinate Mapping.
Coordinate Mapping is the process of converting between the 3D and the 2D space.
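Under the hood, the conversion is performed by the CoordinateMapper class of the official Microsoft SDK. Here is a rough sketch of what that looks like, assuming a running sensor and a hypothetical CameraSpacePoint named cameraPoint:
using Microsoft.Kinect;

// Map a 3D camera-space point to the 2D color and depth spaces
// using the official SDK. "cameraPoint" is a hypothetical CameraSpacePoint.
KinectSensor sensor = KinectSensor.GetDefault();
CoordinateMapper mapper = sensor.CoordinateMapper;

ColorSpacePoint colorPoint = mapper.MapCameraPointToColorSpacePoint(cameraPoint);
DepthSpacePoint depthPoint = mapper.MapCameraPointToDepthSpacePoint(cameraPoint);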
3D space
Using Vitruvius, Coordinate Mapping is as simple as typing one line of C# code. Let’s have a look at an example:
var position = body.Joints[JointType.Head].Position;
This is how we find the position of the Head joint using the official Microsoft SDK. The position variable is an [X, Y, Z] combination that indicates where the head of the person is located.
Projecting the 3D point to the 2D space is accomplished using Vitruvius’ ToPoint method. That method takes a Visualization enumeration as a parameter. To use the ToPoint method, you first need to import Vitruvius into your project:
using LightBuzz.Vitruvius;
2D Color Space (1920×1080)
This is how to convert the 3D point to a 2D point in the 1920×1080 Color Space:
var pointColor = position.ToPoint(Visualization.Color);
var left = pointColor.X;
var top = pointColor.Y;
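You can then use left and top to position an element on top of the color image. A minimal sketch, assuming a WPF Canvas that matches the 1920×1080 color frame and a hypothetical Ellipse named headEllipse:
using System.Windows.Controls;

// Hypothetical: move "headEllipse" to the mapped pixel coordinates.
// The Canvas is assumed to be scaled to the 1920×1080 color frame.
Canvas.SetLeft(headEllipse, left);
Canvas.SetTop(headEllipse, top);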
2D Depth Space (512×424)
Similarly, you can convert the 3D point to a 2D point in the 512×424 Depth Space:
var pointDepth = position.ToPoint(Visualization.Depth);
var left = pointDepth.X;
var top = pointDepth.Y;
2D Infrared Space (512×424)
Converting to the Infrared Space is identical to the Depth Space:
var pointInfrared = position.ToPoint(Visualization.Infrared);
var left = pointInfrared.X;
var top = pointInfrared.Y;
If you are using Unity, there is one additional extension method that converts a 2D point to a 2D Vector:
var vector = position.ToPoint(Visualization.Color).ToVector();
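For example, here is a minimal sketch that logs the head's mapped pixel coordinates, assuming ToVector() returns a UnityEngine.Vector2:
using UnityEngine;

// Hypothetical: print the head's position in the 1920×1080 color space.
Vector2 vector = position.ToPoint(Visualization.Color).ToVector();
Debug.Log("Head at: " + vector.x + ", " + vector.y);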
Using a different Coordinate Mapper
In case you are using multiple Kinect sensors, you can still work with Vitruvius! You simply have to specify which sensor the method should use:
var pointColor = position.ToPoint(Visualization.Color, sensor1.CoordinateMapper);
var pointDepth = position.ToPoint(Visualization.Depth, sensor2.CoordinateMapper);
var pointInfrared = position.ToPoint(Visualization.Infrared, sensor3.CoordinateMapper);
That’s it. You can now project any 3D point to any 2D space!
Actually, the Vitruvius extension methods can be applied to any point, not just body joints. For example, you can specify your own 3D point, like below:
var point3D = new CameraSpacePoint
{
    X = 0.8f,
    Y = 1.4f,
    Z = 3.2f
};
var point2D = point3D.ToPoint(Visualization.Color);
Be cautious, though: not every 3D point corresponds to a 2D point! Why? Because there may be nothing in that position. In our example, if there is nothing 3.2 meters from the sensor, a dummy point with infinite coordinates will be generated. To avoid any confusion, remember to check whether the 2D point has valid X and Y values, like below:
if (!float.IsInfinity(point2D.X) && !float.IsInfinity(point2D.Y))
{
    // The point is valid. Do your magic.
}
else
{
    // Otherwise, it's not a valid point.
}
The ToPoint method is a powerful weapon that will save you a ton of time.
You can access it by downloading Vitruvius.
Download Vitruvius
Hello sir! I am a student at De La Salle University in the Philippines. I am interested in making a Kinect-based project. My project is “Vital Statistics Acquisition using Kinect”. The first thing I want to do is measure the waistline of a person. Can you please help me if you have any reference material or a sample algorithm that I can use for my project? Thank you very much sir!
Hello Dante. This is not directly related to Vitruvius; however, you could do the following:
1) Find the BodyIndex points that belong to the human body.
2) Find the points closer to the SpineBase joint.
3) Detect the left and right points.
4) Measure the distance between them using the Length() extension method of Vitruvius, as in the rough sketch below.
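A minimal sketch of step 4, assuming waistLeft and waistRight are hypothetical CameraSpacePoints detected in steps 1-3 (the manual equivalent of Length() is the 3D Euclidean distance):
// Hypothetical: "waistLeft" and "waistRight" come from steps 1-3.
float dx = waistRight.X - waistLeft.X;
float dy = waistRight.Y - waistLeft.Y;
float dz = waistRight.Z - waistLeft.Z;

// The 3D Euclidean distance between the two points, in meters.
double waistWidth = System.Math.Sqrt(dx * dx + dy * dy + dz * dz);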
Hope that helps you with your problem.
Thank you so much! I look forward to using your software. I haven’t started yet due to prior projects, but I’m excited to try it out!
Excited to hear 🙂
Hi, we’re interested in getting a list of all the 3D points the Kinect can measure.
We know how to do this for the body joints as you’ve shown, and we understand that we can check whether a specified 3D point exists using https://vitruviuskinect.com/coordinate-mapping/
We can get a list of such points in other APIs, like the Kinect v2 plugin for Processing (v3):
// In init:
kinect.enableColorPointCloud(true);

// In draw:
// Obtain the point cloud positions.
FloatBuffer pointCloudBuffer = kinect.getPointCloudColorPos();

// Get the color for each point of the cloud.
FloatBuffer colorBuffer = kinect.getColorChannelBuffer();
Is something like this possible using your libraries?
Hello, Daniel. Using Vitruvius, you can transform a set of coordinates to another coordinate system. However, Vitruvius does not provide a method similar to what you have requested here.
Hey, is it possible to map a 3D point to a picture with a different resolution than 1920×1080? (For example, if I got the 3D point with Kinect and, at the same time, took a picture of the scene with a 1080×720 webcam, can I map the 3D point to this image?)
That would not be possible. Kinect can only map the points between its own depth and RGB cameras.
I couldn’t find an example of a fitting-room WPF application inside the Premium version of Vitruvius. Please send me example code to add 2D virtual clothes to the skeleton. Thanks
Hello Mahesh. A virtual fitting room would require 3D capabilities, and WPF does not provide such advanced 3D capabilities. This is why the virtual fitting room is available in the Unity3D samples. Unity3D also uses the C# language and lets you develop great 3D experiences much more easily.