Avateering is the most exciting part of Kinect development and one of the most complex features to implement. Vitruvius encapsulates all of the hard work and avateering algorithms and exposes an easy-to-use API.
You only need a few rigged 3D models. Vitruvius comes with 9 models, which you can use for free in your personal or commercial applications. It also includes numerous textures to choose from, so each model can appear with different clothing!
Notably, Vitruvius is featured on the official Microsoft Kinect website and on Channel 9.
If you need a different model and have a 3D artist, just provide them with the rigged skeleton and let them create a new 3D model on top of it. If you need different models and do not have a 3D artist, give us a shot and we can create a 3D model tailored to your needs.
In this tutorial, I am going to show you how you can integrate Vitruvius within your Unity project and animate the bundled male and female 3D models. Vitruvius allows a single player to control a single avatar or multiple avatars. The videos below demonstrate the difference:
Prerequisites
- Kinect for XBOX v2 sensor with an adapter (or Kinect for Windows v2 sensor)
- Kinect for Windows v2 SDK
- Windows 8.1 or higher
- Visual Studio 2013 or higher
- A dedicated USB 3 port
Source Code
The source code, along with the required assemblies, is included in the following versions of Vitruvius:
Step 1: Create a new Unity project
Launch Unity and create a new project, or open an existing Kinect project. To get started, simply double-click the .unitypackage file from the folder you downloaded.
You'll also need to import Windows.Kinect.dll and LightBuzz.Vitruvius.dll. These assemblies are required to use the full power of Vitruvius.
Step 2: Import the 3D models
Open your scene file in the Editor and import the 3D models. Place them wherever you think is appropriate for your game. I have imported the male and female 3D body models. They are named "male" and "female", respectively.
This is what my editor looks like:
Step 3: Action!
It's now time to write your Kinect script. You'll need to include the following namespaces (System.Linq is required for the LINQ calls used in the Update method below):
using System.Linq;
using Windows.Kinect;
using LightBuzz.Vitruvius;
After importing the assemblies, we’ll create the required members to open the sensor, acquire the Body data and update the avatars accordingly.
The separatedPlayers field indicates whether each tracked player controls their own avatar (true) or a single player controls all of the avatars (false).
Members
// Kinect
KinectSensor sensor;
BodyFrameReader bodyReader;
Body[] users;
// 3D models
Model[] models;
public FBX female;
public FBX male;
// Determines whether
// one player will control all avatars, or
// whether each player will control one avatar
public bool separatedPlayers;
Start
The Start method initializes Kinect and Vitruvius. It's important to open the sensor and enable the avateering functionality for your 3D models. The code is simple and self-explanatory:
void Start()
{
// 1. Initialize Kinect
sensor = KinectSensor.GetDefault();
if (sensor != null)
{
bodyReader = sensor.BodyFrameSource.OpenReader();
sensor.Open();
}
// 2. Enable Avateering
Avateering.Enable();
// 3. Specify the 3D models to animate.
models = new Model[]
{
female,
male
};
// 4. Initialize each 3D model.
for (int i = 0; i < models.Length; i++)
{
models[i].Initialize();
}
}
Update
Now, it’s time for the cool part! To see your avatars animated, you simply have to loop through the tracked bodies and update the corresponding avatar. Vitruvius does this with just one line of code.
void Update()
{
if (bodyReader != null)
{
using (var frame = bodyReader.AcquireLatestFrame())
{
if (frame != null)
{
users = frame.Bodies().
Where(b => b.IsTracked).ToArray();
if (separatedPlayers)
{
// Each user controls a different avatar.
// Guard against tracking more bodies than available models.
for (int index = 0; index < users.Length && index < models.Length; index++)
{
Model model = models[index];
Body user = users[index];
// Yes, this line does ALL of the hard work for you.
Avateering.Update(model, user);
}
}
else if (users.Length > 0)
{
// A single user controls all of the avatars.
for (int index = 0; index < models.Length; index++)
{
Model model = models[index];
Body user = users[0];
// Yes, this line does ALL of the hard work for you.
Avateering.Update(model, user);
}
}
}
}
}
}
Dispose
Finally, do not forget to dispose of the Kinect resources and clean up the 3D models.
public void Dispose()
{
// 1. Dispose Kinect
if (bodyReader != null)
{
bodyReader.Dispose();
bodyReader = null;
}
if (sensor != null)
{
if (sensor.IsOpen)
{
sensor.Close();
}
sensor = null;
}
// 2. Dispose 3D models
for (int i = 0; i < models.Length; i++)
{
models[i].Dispose();
}
// 3. Disable avateering
Avateering.Disable();
}
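Unity will not call Dispose automatically. A minimal way to hook it into the Unity lifecycle, assuming the method lives in the same script as the code above, is to invoke it when the application quits:
void OnApplicationQuit()
{
    // Release the Kinect and 3D model resources when the app closes.
    Dispose();
}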
This is it! You have just developed an avateering app using Kinect v2!
BONUS – New 3D models
Vitruvius is bundled with nine 3D models you can use in your commercial projects. We are adding new 3D models all the time. Our models are properly rigged and work seamlessly with Kinect. Each model comes with multiple textures. To change a texture, simply open the Samples/Models/FBX folder and pick the 3D models and textures of your choice.
The following models are included:
- Female Casual – a beautiful female model with various hair colors and clothes.
- Male Casual – a male model with various clothes.
- Female Athletic – a female model with sports clothing.
- Male Athletic – a male model with sports clothing.
- Kid – a 3D model of a kid.
- Skeleton – a fantastic model of a human skeleton with bones and skull!
- Longsleeve – a shirt with long sleeves and various colors.
- Tshirt – a simple t-shirt model with various colors.
- Pants – a pair of trousers with various styles (jeans, black, formal).
Bodies
Clothes
This is a video of the skeleton model in action:
So, what will you do with Vitruvius? Let me know in the comments below!
Hi, I was trying to add my own model for avateering, but when I run it, my new model does not copy the user's gestures. The error says the model is not initialized. Waiting for your help.
Hi Tashi. You simply need to call the method yourModel.Initialize() within the Start() method.
Hi Vangos,
How can I port a Unity game that uses your libraries to the Windows Store 8.1 desktop version? It was showing some conflicts with the Kinect libraries.
Regards,
Usman
Hi Usman. Thanks for your comment. Does this tutorial help?
http://vitruviuskinect.com/vitruvius-kinect-unity-windows-store/
Let me know what types of conflicts you are getting.
hi vangos,
I am working on a fitting room project for a retail shop. Can you tell me how I can add clothes of different sizes?
Hello, Rami. Thank you for your message. If you already have a 3D model of a cloth (or if you are using one of the existing clothes), you can change the Color Scale Factor (or the Depth Scale Factor) and increase or decrease the size of the 3D model. This way, you’ll be able to have different sizes. The Scale Factor is a property of the AvatarCloth class.
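For illustration, here is a hedged sketch of the idea (property naming may vary between Vitruvius versions; the fitting-room snippet further down in this page accesses a colorScaleFactor field):
// Illustrative only: grow or shrink a cloth model by adjusting its scale factor.
// "cloth" is assumed to be the AvatarCloth component attached to your 3D model.
public AvatarCloth cloth;

public void ResizeCloth(float factor)
{
    // factor > 1 enlarges the cloth, factor < 1 shrinks it.
    cloth.colorScaleFactor *= factor;
}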
Dear Vangos,
I tried to import a 3D model from 3ds Max into the Unity Vitruvius fitting room, but I cannot control it. Please let me know how to configure it.
Hello, Bader. Thank you for your message. I assume you have already added your .fbx model in your Unity scene. In your Hierarchy window, select your model and click “Add Component”. Then, add the “AvatarCloth.cs” script.
In your C# code, you need to call the following methods:
void Start()
{
// Initialize the sensor, like the Samples.
Avateering.Enable();
model.Initialize();
}
void Update()
{
// Get the Body object, like the Samples.
// Then, animate the 3D model.
Avateering.Update(model, body);
// Then, update the position of the 3D model in the 2D Color space:
var position = body.Joints[model.Pivot].Position.ToPoint(Visualization.Color);
if (!float.IsInfinity(position.x) && !float.IsInfinity(position.y))
{
frameView.SetPositionOnFrame(ref position);
model.SetBonePosition(model.Pivot, position);
var distance = model.JointInfos[(int)model.Pivot].RawPosition.z;
if (distance != 0)
{
model.Body.transform.localScale = model.ScaleOrigin * (model.colorScaleFactor / distance) * frameView.ViewScale;
}
}
}
void OnApplicationQuit()
{
Avateering.Disable();
model.Dispose();
}
Consider checking the FittingRoom sample, too. You could simply drag-and-drop your model in your scene.
Feel free to contact us if you need more information or screenshots.
Dear Vangos,
Thank you for the help, but could you show me how to control an FBX model that I added to the FittingRoom sample? I still have a problem with the code.
Is the code below correct?
using UnityEngine;
using Windows.Kinect;
using LightBuzz.Vitruvius;
using LightBuzz.Vitruvius.Avateering;
public class body111 : MonoBehaviour {
// Use this for initialization
void Start () {
// Initialize the sensor, like the Samples.
Avateering.Enable();
model.Initialize();
}
// Update is called once per frame
void Update () {
// Get the Body object, like the Samples.
// Then, animate the 3D model.
Avateering.Update(model, body);
// Then, update the position of the 3D model in the 2D Color space:
var position = body.Joints[model.Pivot].Position.ToPoint(Visualization.Color);
if (!float.IsInfinity(position.x) && !float.IsInfinity(position.y))
{
frameView.SetPositionOnFrame(ref position);
model.SetBonePosition(model.Pivot, position);
var distance = model.JointInfos[(int)model.Pivot].RawPosition.z;
if (distance != 0)
{
model.Body.transform.localScale = model.ScaleOrigin * (model.colorScaleFactor / distance) * frameView.ViewScale;
}
}
}
void OnApplicationQuit()
{
Avateering.Disable();
model.Dispose();
}
}
Hello. You also need to include the code in the FittingRoomSample.cs file. Can you see that file?
[…] need to map specific body parts to these joint rotations. This post is using the male model from Vitruvius avateering tools, but you are welcome to use any properly rigged […]
Hi Vangos. I want to add a virtual 3D model of hair. How can I do this? Is there any link to help me?
Hello Amin. You can add the 3D model of the hair as a child element of the Head of the avatar. This way, the hair will follow the position and rotation of the head. You can also set your own offset. Please, use our support page, so we can send you more details and screenshots.
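A minimal Unity sketch of that idea (headBone and hairPrefab are hypothetical references you would assign in the Inspector):
// Attach a hair model to the avatar's head bone so it follows the head's
// position and rotation automatically.
public Transform headBone;      // the Head bone of your rigged avatar
public GameObject hairPrefab;   // your 3D hair model

void AttachHair()
{
    GameObject hair = (GameObject)Instantiate(hairPrefab);
    hair.transform.SetParent(headBone, false);                  // keep local space
    hair.transform.localPosition = new Vector3(0f, 0.05f, 0f);  // optional offset
    hair.transform.localRotation = Quaternion.identity;
}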
Hi,
I’m working on my Master Thesis and would be great if I could use your models in my project.
Is there any free student version of the product?
Kind regards,
Marko
Hi Marko. We are offering a discounted Academic version that includes all of the 3D models and samples. It’s a lifetime license for non-commercial use.
Hi Vangos, our company offers a body tracking SDK for the Orbbec Persee Android computer (with a 3D sensor) http://www.3divi.com/products/nuitrack-sdk-for-persee and our clients ask: can your solution be ported to the Orbbec Persee Android sensor? Looking forward to your reply. Thanks
Hello, Dmitry. Vitruvius is compatible with Kinect v2. Internally, our framework is capable of animating any 3D model, but you'll need a custom version to support different sensors. Please contact our team in case you would like to discuss a custom version of Vitruvius.
Hi Vangos, the AvateeringSample script doesn't recognize my 3D model and logs this error:
NullReferenceException: Object reference not set to an instance of an object
LightBuzz.Vitruvius.Avateering.Pose.ReadPose (LightBuzz.Vitruvius.Avateering.Model model)
LightBuzz.Vitruvius.Avateering.FBX.InitializeBody ()
LightBuzz.Vitruvius.Avateering.Model.Initialize ()
AvateeringSample.Awake () (at Assets/LightBuzz.Vitruvius/Scripts/Samples/Avateering Sample/AvateeringSample.cs:35)
Hello Agustin. Your model needs to be a humanoid avatar and have the Unity skeleton.
My model has a humanoid avatar and skeleton :/
Then, you need to reference the model in your code and call the Initialize() method.
// Start
myModel.Initialize();
// Update
Avateering.Update(myModel, body);
You can also check the Avateering sample and replace one of the existing models with your own one.
I already tried all that u.u. When I replace the models with mine, it shows an error.
Did you add your model to the avatar array in the Editor, too? You can send your code to our Support team, so we can have a look.
Hi Vangos, actually I want to know: do you have any tutorial (in your tutorial series) or any component (in any version of Vitruvius) for using cloth physics on the avatar in Unity3D? For example, for avatar clothing, the clothes would be affected by joint movements and their appearance would change.
Hello Rim. Thanks for your message. Cloth animations are strongly tied to the 3D model of the character and may differ significantly from one model to another. You need to edit the 3D models using 3D design software and Unity3D (or Unreal Engine or DirectX).
I would recommend the following tutorial:
Thanks Vangos.
Dear Vangos Pterneas
We are looking for an end-user application, a "player" that makes it possible to import a 3D figure and present it in kiosk mode, fullscreen on a display. We need this 3D figure to be animated with the Kinect sensor. Your application is exactly what we need, but we are not application developers; we need an installer program with a GUI. Can you help us?
kind regards
Beat
Hello. Vitruvius is a tool for software developers. In case you would like our team to develop a GUI project for you, we would be happy to help you. Please, send us a message using our contact form and we’ll get back to you with an estimation.
Hi,
Is it possible to use Vitruvius to record & save Avatar data as animations? And then to apply the animations to a pre-created rig. Basically, I would like to use the Kinect as a MoCap device!
Any tips on how to do that would be appreciated!
Thank you.
Hello Prateek. Using Vitruvius, you can record the Body objects and store the joint positions and orientations in a custom binary file. You can then read the binary file and apply the animations to a 3D humanoid model. Our Unity samples include a demo that does exactly that 🙂
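For anyone curious about the general idea, here is a rough, hedged sketch of recording raw joint data yourself. This is not the Vitruvius recorder API, just a plain BinaryWriter example with an arbitrary file layout:
// Illustrative only: write the joint positions of a tracked body to a binary file,
// one frame at a time.
using System.IO;
using Windows.Kinect;

public class SimpleBodyRecorder
{
    private BinaryWriter writer;

    public void Open(string path)
    {
        writer = new BinaryWriter(File.Open(path, FileMode.Create));
    }

    public void WriteFrame(Body body)
    {
        if (writer == null || body == null || !body.IsTracked) return;

        foreach (var joint in body.Joints.Values)
        {
            writer.Write((int)joint.JointType);
            writer.Write(joint.Position.X);
            writer.Write(joint.Position.Y);
            writer.Write(joint.Position.Z);
        }
    }

    public void Close()
    {
        if (writer != null) writer.Close();
    }
}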
Hello Vangos,
I added a new model to the avateering example. It worked, but all of the joints are inverted:
for example, when my hands are down, the model's hands are up.
I thought I should negate the (x, y) values to make them (-x, -y), but I couldn't because the Body class is sealed. Can you please tell me how I can fix the problem?
this is my model “https://www.mixamo.com/#/?page=1&query=malcolm&type=Character”
Hello Yara. This is simple to solve: when you first drag-and-drop your model in Unity, you need to set its Y Rotation value to 180 degrees. This would eliminate the problem.
If you need to keep the default rotation of the model, you can rotate the model in your C# code after you call the model.Initialize() method.
The upcoming version of Vitruvius will automatically adjust the rotation.
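As a rough sketch of the second option (avatarRoot is a hypothetical reference to the root GameObject of your imported model; adjust it to your own hierarchy):
// Illustrative only: rotate the avatar 180 degrees around Y after initialization,
// so that it faces the camera the same way as the bundled models.
public GameObject avatarRoot;

void Start()
{
    // ... sensor setup, Avateering.Enable(), model.Initialize() ...
    avatarRoot.transform.rotation = Quaternion.Euler(0f, 180f, 0f);
}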
Hi Vangos,
Thanks for this great product. I just bought the SDK and tried to deploy it from within Unity (Windows x86). When I hit build, I get the following error:
Plugin ‘KinectUnityAddin.dll’ is used from several locations:
Assets/Plugins/x86/KinectUnityAddin.dll would be copied to /KinectUnityAddin.dll
Assets/Plugins/x86_64/KinectUnityAddin.dll would be copied to /KinectUnityAddin.dll
Do you know how to fix this?
Best,
K e v i n
Hi Kevin and thanks for using Vitruvius! It’s really easy to solve this problem:
First, within the Unity Editor, find the Assets/Plugins/x86/KinectUnityAddin.dll. In the Inspector window, ensure that ONLY the “x86” target is selected. Click Apply.
Then, locate the Assets/Plugins/x86_64/KinectUnityAddin.dll and select ONLY the “x86_64” target. Click Apply.
Ensure your project is working and then build it again. Let me know if that worked for you!
In case you are still facing problems with the build process, please send an email to our Support Team and we’ll get back to you asap.
Hi Vangos, thank you for the quick reply! We got it running with your samples; however, we got stuck using our own avatar. Is there a step-by-step demo on how to make our own avatar move from Kinect input (analogous to your avatars)? At the moment, it is a time-consuming trial-and-error process for us (the tutorial you provided above is kind of hard for beginners, e.g. where exactly are the source code files located?).
Thanks a lot!
K e v i n
Hi Kevin. You can definitely animate other avatars. Are you having trouble with the skeleton (e.g. not moving properly) or with the actual placement of the 3D models into your scene?
Hi Vangos, thanks for your reply. I got this resolved: our own avatar is moving according to the Kinect input. The final step would be to include facial expressions / mimics. The HD face is working fine, but is there a way to make our own avatar move accordingly?
Best,
K e v i n
Awesome 🙂
The HD face has over 1000 vertices. We have included a sample Face model with all of the required vertices in the Face Mesh Sample scene. You could use that model as your base.
Hi Vangos, I am on the Face Mesh Sample right now and trying to replace your default face with our own fbx face. Do you have a tutorial that elaborates on the steps that are to be taken to get it up and running?
Best,
K e v i n
Your 3D artist can use the Face model to see the vertices and then decorate it with the desired visual elements. Other than that, all you need to do is call Avateering.Update() and provide the new model as a parameter.
Ok, will try. What about the face points sample? My understanding is that it has far fewer points but should work the same way.
Correct. The face sample is displaying fewer points, but it’s customizable and you can display as many points as you like 🙂
Do you have another face that we can use to try the replacement of the current default face?
We only distribute the bundled Face model.
In Unity, can you use the Kinect to scale a model so that it matches the dimensions of a person? So the model would match the person's height, have the same arm length, the same sized legs, etc.
Also, we don't need to actively track the person; we just need to scale and constrain the model to the new scale once.
Is this possible?
Hello Alex and thank you for using Vitruvius! You can definitely scale a 3D model to match the dimensions of a person. However, you will need to design an adjustable 3D model. Using Vitruvius, you can easily detect the length of the human bones. For example, the following code will give you the length of the forearm:
var elbow = body.Joints[JointType.ElbowLeft].Position;
var wrist = body.Joints[JointType.WristLeft].Position;
var length = elbow.Length(wrist);
In a similar way, you can measure any other bone.
You can apply that measurement to your 3D model. Your 3D model should be designed with specific blendshapes in the proper bones. The blendshapes will let you modify a portion of the model (e.g. its forearm) in real-time.
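As a hedged illustration of that last step (the blendshape index and the length-to-weight mapping are assumptions you would tune for your own model):
// Illustrative only: drive a forearm blendshape from a measured bone length.
public SkinnedMeshRenderer avatarMesh;   // the avatar's skinned mesh
public int forearmBlendShapeIndex = 0;   // hypothetical blendshape on the forearm

void ApplyForearmLength(float measuredLength)
{
    // Example mapping: 0.20 m maps to weight 0, 0.35 m maps to weight 100.
    // Tune these numbers for your own 3D model.
    float weight = Mathf.InverseLerp(0.20f, 0.35f, measuredLength) * 100f;
    avatarMesh.SetBlendShapeWeight(forearmBlendShapeIndex, weight);
}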
Hello Vangos,
I want to track multiple bodies for fitting at the same time. How do I do that?
Thank you.
Hello Shey. This is an example of tracking 2 bodies at the same time, while doing avateering for both of them:
Frame frame = sensor.UpdateFrame();
if (frame != null)
{
Body[] bodies = frame.BodyData;
Body body1 = null;
Body body2 = null;
foreach (Body b in bodies)
{
if (b != null && b.IsTracked)
{
if (body1 == null) body1 = b;
else body2 = b;
}
}
if (body1 != null)
{
model1.DoAvateering(body1);
}
if (body2 != null)
{
model2.DoAvateering(body2);
}
}
Hello Vangos,
I'm working on the fitting room project. Can you tell me which parameters I should change to alter the size of the clothes in the latest version of Vitruvius?
Thanks.
Hello Shey. You could modify the GameObject’s scale value proportionately to the user’s height.
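For example, a hedged sketch of that idea (the joint choice, the reference height, and the assumption that the cloth's default scale is Vector3.one are all illustrative):
// Illustrative only: scale a cloth GameObject proportionately to the user's height,
// estimated as the distance between the Head and the left Foot joints.
float EstimateHeight(Body body)
{
    var head = body.Joints[JointType.Head].Position;
    var foot = body.Joints[JointType.FootLeft].Position;

    float dx = head.X - foot.X;
    float dy = head.Y - foot.Y;
    float dz = head.Z - foot.Z;

    return Mathf.Sqrt(dx * dx + dy * dy + dz * dz);
}

void ScaleCloth(GameObject cloth, Body body)
{
    float referenceHeight = 1.75f;  // height the cloth was modeled for (example value)
    float factor = EstimateHeight(body) / referenceHeight;

    cloth.transform.localScale = Vector3.one * factor;
}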
I bought the Premium version, and I'm trying to build a customized avateering example. I have gone through the given source code, completed the modeling, and added a script.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Windows.Kinect;
using LightBuzz.Vitruvius;
using LightBuzz.Vitruvius.Avateering;
using System.Linq;
public class humanoid : MonoBehaviour
{
// Kinect
BodyFrameReader bodyReader;
Body[] users;
// 3D models
Model[] models;
public FBX humanoid1;
// Determines whether
// one player will control all avatars, or
// whether each player will control one avatar
public bool seperatedPlayers;
private KinectSensor sensor;
// Start is called before the first frame update
void Start()
{
// 1. Initialize Kinect
sensor = KinectSensor.GetDefault();
if (sensor != null)
{
bodyReader = sensor.BodyFrameSource.OpenReader();
sensor.Open();
}
// 2. Enable Avateering
Avateering.Enable();
// 3. Specify the 3D models to animate.
models = new Model[]
{
humanoid1
};
// 4. Initialize each 3D model.
for (int i = 0; i < models.Length; i++)
{
models[i].Initialize();
}
}
// Update is called once per frame
void Update()
{
if (bodyReader != null)
{
using (var frame = bodyReader.AcquireLatestFrame())
{
if (frame != null)
{
users = frame.Bodies().Where(b => b.IsTracked).ToArray();
if (seperatedPlayers)
{
// Each user controls a different avatar.
for (int index = 0; index < users.Length; index++)
{
Model model = models[index];
Body user = users[index];
Avateering.Update(model, user);
}
}
else if (users.Length > 0)
{
// A single user controls all of the avatars.
for (int index = 0; index < models.Length; index++)
{
Model model = models[index];
Body user = users[0];
// Yes, this line does ALL of the hard work for you.
Avateering.Update(model, user);
}
}
}
}
}
}
public void Dispose()
{
// 1. Dispose Kinect
if (bodyReader != null)
{
bodyReader.Dispose();
bodyReader = null;
}
if (sensor != null)
{
if (sensor.IsOpen)
{
sensor.Close();
}
sensor = null;
}
// 2. Dispose 3D models
for (int i = 0; i < models.Length; i++)
{
models[i].Dispose();
}
// 3. Disable avateering
Avateering.Disable();
}
}
But Unity shows this error.
Assembly 'Library/ScriptAssemblies/Assembly-CSharp.dll' will not be loaded due to errors:
Reference has errors 'LightBuzz.Vitruvius'.
Assembly 'Assets/Plugins/x86_64/LightBuzz.Vitruvius.dll' will not be loaded due to errors:
Unable to resolve reference 'LightBuzz.Vitruvius.Unity'. Is the assembly missing or incompatible with the current platform?
Please tell me how to fix this issue
Hello Mahesh. Thank you for using Vitruvius! Please, ensure that the DLL files of your project are properly targeting x86 and x86_64 architectures: the DLL files that are located in the “x86” folder should only target the “x86” architecture. The DLL files located in the “x86_64” folder should only target the “x86_64” architecture.
If you are using Vitruvius 5+, the code would be a little different. You can refer to the avateering sample + source code included in the Unity package!
Hi Vangos,
I want to limit certain joint rotations. Is it possible to integrate custom constraints or add filtering? Is the source code included, or is there any API or possibility to further control the avateering process?
Thanks!
Hello Nacho. Sure, you can add your own constraints and update the Joints dictionary accordingly. Which version of Vitruvius are you using?
I haven't used it yet; I'm researching alternatives. So after calling Avateering.Update(), could I modify joint positions and rotations without messing with the smoothing/filtering or anything else?
Exactly. You can modify the positions/orientations of the joints as you wish.
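For example, here is a hedged sketch of a constraint applied after the avateering update (leftElbowBone is a hypothetical reference to a bone Transform of your rig, and the angle limits are arbitrary):
// Illustrative only: clamp a bone's local rotation after Avateering.Update()
// has run. LateUpdate executes after Update, so the constraint wins each frame.
public Transform leftElbowBone;

void LateUpdate()
{
    if (leftElbowBone == null) return;

    Vector3 euler = leftElbowBone.localEulerAngles;

    // Unity reports angles in [0, 360); remap to [-180, 180) before clamping.
    float x = euler.x > 180f ? euler.x - 360f : euler.x;
    x = Mathf.Clamp(x, -10f, 140f);  // arbitrary limits for this example

    leftElbowBone.localEulerAngles = new Vector3(x, euler.y, euler.z);
}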
Great, thanks for the quick response! Is there any demo of the avateering functionality available?
The avateering is part of the Academic and Premium packages. You can check the videos at the start of this article to see how avateering works, though.
Hi,
thank you for this nice post. I want to ask if this package also supports Kinect v1?
Hi Yang. Kinect v1 is not supported by Vitruvius. However, you can check an older open-source version for Kinect v1 we developed a few years ago, hosted on GitHub.