Hi All,
I am working on a project where I am attempting to track the position of facial features in global 3D space. However, I only have access to a single electromagnetic motion tracker sensor, which I have attached to a baseball cap worn by the person I am tracking. The sensor returns its position and its orientation as Euler angles (yaw, pitch, roll). The image below demonstrates my setup.
So really, it is the position of the eyes (e2 and e1) and the nose (n) I would like to be able to track in global 3D space. For simplicity, I will use the nose as my example throughout this post.
I have used photographs of people wearing the cap to estimate the spatial offset between the sensor on the top of the head and the nose; this estimated offset is [0, -15, 5]. I am trying to use this offset to calculate future positions of the nose, based on where the motion tracker is and how it is oriented in space.
I have attempted the method below, but I am getting inaccurate results, and I believe I am going wrong somewhere or have not quite grasped something.
1. I get the Euler angles of the sensor in global space. These are three rotations around the x, y and z axes.
2. I create a rotation matrix based on these three angles:
     [ 1    0        0       0 ]
Rx = [ 0   -cos(x)  -sin(x)  0 ]
     [ 0    sin(x)   cos(x)  0 ]
     [ 0    0        0       1 ]

     [  cos(y)  0  sin(y)  0 ]
Ry = [  0       1  0       0 ]
     [ -sin(y)  0  cos(y)  0 ]
     [  0       0  0       1 ]

     [ cos(z)  -sin(z)  0  0 ]
Rz = [ sin(z)   cos(z)  0  0 ]
     [ 0        0       1  0 ]
     [ 0        0       0  1 ]
R = Rz · Ry · Rx
3. I create a translation matrix based on the offset:
    [ 1  0  0    0 ]
T = [ 0  1  0  -15 ]
    [ 0  0  1    5 ]
    [ 0  0  0    1 ]
4. I then multiply the sensor's position by translation matrix T.
5. Then I multiply the result of no. 4 by rotation matrix R to give me what I *thought* would be the position of the nose.
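In case it helps anyone reproduce this, here is a minimal, runnable sketch of steps 1-5 in Python (standard library only). The sensor reading and angle values are invented purely for illustration, and I have written the rotation matrices in their standard right-handed form, with angles in radians:

```python
import math

def mat_mul(a, b):
    """Multiply two 4x4 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def mat_vec(m, v):
    """Apply a 4x4 matrix to a homogeneous [x, y, z, 1] vector."""
    return [sum(m[i][k] * v[k] for k in range(4)) for i in range(4)]

def rotation(x, y, z):
    """Step 2: compose R = Rz . Ry . Rx from Euler angles (radians)."""
    cx, sx = math.cos(x), math.sin(x)
    cy, sy = math.cos(y), math.sin(y)
    cz, sz = math.cos(z), math.sin(z)
    rx = [[1, 0, 0, 0], [0, cx, -sx, 0], [0, sx, cx, 0], [0, 0, 0, 1]]
    ry = [[cy, 0, sy, 0], [0, 1, 0, 0], [-sy, 0, cy, 0], [0, 0, 0, 1]]
    rz = [[cz, -sz, 0, 0], [sz, cz, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
    return mat_mul(rz, mat_mul(ry, rx))

def translation(ox, oy, oz):
    """Step 3: translation matrix built from the sensor-to-nose offset."""
    return [[1, 0, 0, ox], [0, 1, 0, oy], [0, 0, 1, oz], [0, 0, 0, 1]]

# Invented sensor reading, purely for illustration.
sensor_pos = [10.0, 20.0, 30.0]      # global position of the sensor
roll, pitch, yaw = 0.05, -0.1, 0.3   # Euler angles in radians

R = rotation(roll, pitch, yaw)       # step 2 (x, y, z as above)
T = translation(0, -15, 5)           # step 3, my estimated offset

p = sensor_pos + [1.0]               # homogeneous coordinates
nose = mat_vec(R, mat_vec(T, p))     # steps 4 and 5: R . (T . p)
print(nose[:3])
```

With all three angles at zero this reduces to simply adding the offset to the sensor position, which is a useful sanity check on the matrices.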
However, this is not correct: I get quite significant movement of the nose where I do not expect much. Can anyone see anything wrong with my method, and/or suggest how I could improve it?
Many thanks