
May 12 2013


Face Tracking in Unity 3D

To implement the Ortho-stereo effect I needed to find out how to do face tracking inside Unity. Move.me has no face-tracking functionality implemented; it only delivers the camera images. Face tracking therefore has to be calculated on the PC.

Face tracking in Unity can be done with OpenCVSharp or FaceAPI. First I tried OpenCVSharp. I found a forum post describing how to integrate OpenCVSharp into Unity, and after some work I was able to use the camera image I got from Move.me for face tracking. The problem with that approach is that the face-detection function of OpenCVSharp is too slow for a real-time application. Additionally, the image delivered by Move.me needs to be converted into a format that OpenCVSharp can use, which also takes too much time. With a camera resolution of 640×480 I was only able to track the face 3 times per second; ideally, the face position would be tracked 60 times per second. The output of the face detection is also not optimal: it is a rectangle with a position, width and height in the image. This data needs to be converted to get the player's position in the room relative to the camera.
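That rectangle-to-position conversion is not something OpenCVSharp gives you. Below is a minimal sketch of how it can be derived from a pinhole camera model; the field-of-view and face-width constants are my own assumptions for illustration, not measured values.

```csharp
using UnityEngine;

// Minimal sketch: estimate the head position relative to the camera from a
// face-detection rectangle, assuming a pinhole camera model, an assumed
// average real face width and an assumed PlayStation Eye field of view.
public static class FaceRectToPosition
{
    const float ImageWidth  = 640f;
    const float ImageHeight = 480f;
    const float HorizontalFovDeg = 56f;  // assumed PS Eye zoom setting
    const float FaceWidthM = 0.16f;      // assumed average face width in metres

    // rect: face rectangle in pixels (x, y = top-left corner; width, height)
    public static Vector3 Estimate(Rect rect)
    {
        // Focal length in pixels, derived from the horizontal field of view.
        float focalPx = (ImageWidth * 0.5f) /
                        Mathf.Tan(HorizontalFovDeg * 0.5f * Mathf.Deg2Rad);

        // Distance: the face appears smaller the further away it is.
        float z = FaceWidthM * focalPx / rect.width;

        // Pixel offset of the face centre from the image centre.
        float cx = rect.x + rect.width  * 0.5f - ImageWidth  * 0.5f;
        float cy = rect.y + rect.height * 0.5f - ImageHeight * 0.5f;

        // Back-project to metres at distance z (image y grows downwards).
        float x =  cx * z / focalPx;
        float y = -cy * z / focalPx;

        return new Vector3(x, y, z);
    }
}
```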

The alternative was to use FaceAPI. A non-commercial version can be downloaded for free. The problem with FaceAPI is that it cannot use the Move.me camera image: a camera has to be connected directly to the PC to do face tracking with that software. Luckily, I had another PlayStation Eye camera that I could connect to the PC. For this prototype it is OK to use a second camera to show the functionality. To use the data of FaceAPI inside Unity, FaceAPIStreamer can be used. I used Andy Saia's demo as a starting point for my implementation. In his demo he controls the camera position according to the data he gets from FaceAPIStreamer. That data consists of the head position as X, Y and Z values relative to the camera and the rotation of the head.
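To give an idea of how such streamed head data can arrive in Unity, here is a minimal receiver sketch. The port number and the packet layout (six whitespace-separated floats for position and rotation) are assumptions on my part; the actual FaceAPIStreamer wire format may differ.

```csharp
using System.Globalization;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using UnityEngine;

// Sketch of a UDP head-tracking receiver. Port and packet format
// ("posX posY posZ rotX rotY rotZ" as ASCII text) are assumed.
public class HeadTrackingReceiver : MonoBehaviour
{
    public int port = 29129;        // assumed port
    public Vector3 headPosition;    // metres, relative to the camera
    public Vector3 headRotation;    // Euler angles in degrees

    UdpClient client;
    Thread thread;
    readonly object sync = new object();
    Vector3 pos, rot;

    void Start()
    {
        client = new UdpClient(port);
        thread = new Thread(Receive) { IsBackground = true };
        thread.Start();
    }

    void Receive()
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] data = client.Receive(ref remote);  // blocks until a packet arrives
            string[] v = System.Text.Encoding.ASCII.GetString(data).Split(' ');
            if (v.Length < 6) continue;
            lock (sync)
            {
                pos = new Vector3(Parse(v[0]), Parse(v[1]), Parse(v[2]));
                rot = new Vector3(Parse(v[3]), Parse(v[4]), Parse(v[5]));
            }
        }
    }

    static float Parse(string s) => float.Parse(s, CultureInfo.InvariantCulture);

    // Copy the latest values onto the main thread once per frame.
    void Update()
    {
        lock (sync) { headPosition = pos; headRotation = rot; }
    }

    void OnDestroy()
    {
        client.Close();
        if (thread != null) thread.Abort();
    }
}
```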

First I used a camera script from a tutorial to create an off-axis projection matrix for a single rendered image. The script is based on the paper Generalized Perspective Projection by Robert Kooima. As soon as I had the off-axis camera working together with the input data of FaceAPIStreamer, I adjusted the S3D camera script of StereoskopixFOV2GO V3 to create an Ortho-stereo 3D effect. While developing the Ortho-stereo 3D camera I found out that StereoskopixFOV2GO V3 does not use parallel projection for its S3D effect; it uses a simple offset of two cameras. It is also possible to enable toed-in rendering.
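For reference, the core of Kooima's method can be sketched in Unity as follows. pa, pb and pc are the physical screen's lower-left, lower-right and upper-left corners and pe is the tracked eye position, all in the same tracking coordinate system; the Unity packaging here is my own illustration, not the tutorial's script.

```csharp
using UnityEngine;

// Sketch of Robert Kooima's "Generalized Perspective Projection":
// build an off-axis frustum from the screen corners and the eye position.
public static class GeneralizedPerspective
{
    public static Matrix4x4 Projection(Vector3 pa, Vector3 pb, Vector3 pc,
                                       Vector3 pe, float near, float far)
    {
        // Orthonormal basis of the screen plane.
        Vector3 vr = (pb - pa).normalized;             // screen right
        Vector3 vu = (pc - pa).normalized;             // screen up
        Vector3 vn = Vector3.Cross(vr, vu).normalized; // screen normal

        // Vectors from the eye to the screen corners.
        Vector3 va = pa - pe;
        Vector3 vb = pb - pe;
        Vector3 vc = pc - pe;

        // Distance from the eye to the screen plane.
        float d = -Vector3.Dot(va, vn);

        // Frustum extents projected onto the near plane.
        float l = Vector3.Dot(vr, va) * near / d;
        float r = Vector3.Dot(vr, vb) * near / d;
        float b = Vector3.Dot(vu, va) * near / d;
        float t = Vector3.Dot(vu, vc) * near / d;

        // Standard off-axis (glFrustum-style) projection matrix.
        var p = Matrix4x4.zero;
        p[0, 0] = 2f * near / (r - l);
        p[0, 2] = (r + l) / (r - l);
        p[1, 1] = 2f * near / (t - b);
        p[1, 2] = (t + b) / (t - b);
        p[2, 2] = -(far + near) / (far - near);
        p[2, 3] = -2f * far * near / (far - near);
        p[3, 2] = -1f;

        // Rotate the world into screen space and move the eye to the apex.
        var m = Matrix4x4.identity;
        m.SetRow(0, new Vector4(vr.x, vr.y, vr.z, 0f));
        m.SetRow(1, new Vector4(vu.x, vu.y, vu.z, 0f));
        m.SetRow(2, new Vector4(vn.x, vn.y, vn.z, 0f));

        return p * m * Matrix4x4.TRS(-pe, Quaternion.identity, Vector3.one);
    }
}
```

The frustum part p corresponds to what Unity expects in Camera.projectionMatrix, while the rotation and translation at the end play the role of the view transform.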

My Ortho-stereo 3D camera uses parallel projection for the S3D effect and modifies the asymmetric view frustum according to the player's position, which is tracked with the help of FaceAPI. It is possible to adjust the image separation and the position of the zero-parallax plane inside the scene. All objects in front of the plane have a negative parallax (they appear in front of the screen) and all objects behind the plane have a positive parallax (they appear inside the screen).
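To illustrate the idea (this is not the actual StereoskopixFOV2GO code, and all field names are my own), here is a sketch of such a parallel-axis stereo pair: each eye camera is shifted by half the separation and its frustum is skewed so that both frusta coincide exactly at the zero-parallax plane.

```csharp
using UnityEngine;

// Sketch of a parallel-projection (off-axis) stereo pair. Objects closer
// than zeroParallaxDistance get negative parallax (in front of the screen),
// objects further away get positive parallax (inside the screen).
public class OrthoStereoRig : MonoBehaviour
{
    public Camera leftEye;
    public Camera rightEye;
    public float separation = 0.065f;       // interaxial distance in metres
    public float zeroParallaxDistance = 2f; // distance of the screen plane

    void LateUpdate()
    {
        Apply(leftEye,  -separation * 0.5f);
        Apply(rightEye, +separation * 0.5f);
    }

    void Apply(Camera eye, float offset)
    {
        // Parallel axes: shift the camera sideways, do not toe it in.
        eye.transform.localPosition = new Vector3(offset, 0f, 0f);
        eye.transform.localRotation = Quaternion.identity;

        // Symmetric frustum extents on the near plane (vertical FOV).
        float top   = eye.nearClipPlane *
                      Mathf.Tan(eye.fieldOfView * 0.5f * Mathf.Deg2Rad);
        float right = top * eye.aspect;

        // Skew each frustum so both eyes see the same image rectangle
        // at the zero-parallax distance.
        float shift = offset * eye.nearClipPlane / zeroParallaxDistance;

        eye.projectionMatrix = Frustum(-right - shift, right - shift,
                                       -top, top,
                                       eye.nearClipPlane, eye.farClipPlane);
    }

    static Matrix4x4 Frustum(float l, float r, float b, float t, float n, float f)
    {
        var m = Matrix4x4.zero;
        m[0, 0] = 2f * n / (r - l);
        m[0, 2] = (r + l) / (r - l);
        m[1, 1] = 2f * n / (t - b);
        m[1, 2] = (t + b) / (t - b);
        m[2, 2] = -(f + n) / (f - n);
        m[2, 3] = -2f * f * n / (f - n);
        m[3, 2] = -1f;
        return m;
    }
}
```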

The first video shows the off-axis version.

Download: 360p | 720p | iPhone

The second video shows the Ortho-stereo 3D effect. Anaglyph glasses are needed to view the S3D effect.

Download: 360p | 720p | iPhone
