Having estimated the orientation of your motion sensors, the most difficult part (both mathematically and implementation-wise) is done. However, we still don't know anything about the actual kinematics of the human body. To be able to compute this information in post-processing, further measures (ideally simple, ubiquitous and universally applicable) should already be taken during data collection.

Estimating Initial Posture

The previous data fusion gives us information on the orientation of every sensor as angular displacement per time. This displacement is relative to the orientation of the previous time frame or, at the beginning of a data take, to the quaternion defined as the start orientation. Of course, it is possible to choose some arbitrary standard value (like the quaternion [1 0 0 0]) for that task: by granting enough influence to the measurement data, the fusion filter would be corrected continuously and approach the actual state. But that could take a long time (if it happens at all!). So how else can you obtain all the information you want and need for further analysis? Most simply, just pre-estimate the initial orientation of every sensor. This makes estimate and measurement data converge quickly. Moreover, it has another positive side effect: trials become comparable, even under different capture conditions or in different places.

The estimation of body kinematics should be independent of any reference start position or calibration movement, to avoid distracting the athlete. With this in mind, it is reasonable to use a method that determines sensor orientation from the accelerometer and magnetometer readings alone, such as the QUaternion ESTimator (QUEST) algorithm [5]. In this algorithm, attitude is represented as a combination of the rotational displacements around the three principal axes of the global frame.

Generally, a sensor can take up any arbitrary orientation through consecutive rotations around one or more of its axes. These rotations are described by trigonometric relations within the earth frame and change the local accelerometer and magnetometer readings accordingly. This in turn means that the rotations can also be derived from the sensor readings. Magnetic field measurements should only be used to determine the yaw estimate, so that magnetic distortions cannot affect the estimates of roll and pitch.
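As an illustration of this derivation, here is a minimal sketch in Python with NumPy that obtains roll and pitch from the normalized accelerometer reading and a tilt-compensated yaw from the magnetometer. The function name and the right-handed, z-down axis convention are our assumptions, not prescribed by the method:

```python
import numpy as np

def tilt_and_heading(acc, mag):
    """Roll/pitch from gravity, yaw from the tilt-compensated magnetometer.

    acc, mag: 3-element vectors in the sensor frame. Assumes a static
    sensor (acc measures gravity only) and a right-handed, z-down axis
    convention; all angles are returned in radians.
    """
    ax, ay, az = acc / np.linalg.norm(acc)
    # Roll and pitch follow from the direction of gravity alone, so
    # magnetic distortions cannot influence them.
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    # De-rotate the magnetometer reading into the horizontal plane
    # before computing yaw.
    mx, my, mz = mag / np.linalg.norm(mag)
    mh_x = (mx * np.cos(pitch)
            + my * np.sin(pitch) * np.sin(roll)
            + mz * np.sin(pitch) * np.cos(roll))
    mh_y = my * np.cos(roll) - mz * np.sin(roll)
    yaw = np.arctan2(-mh_y, mh_x)
    return roll, pitch, yaw
```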

[Figure: initial sensor orientation]

A principal sequence for the determination of any angular displacement in Euler angles is to first rotate the sensor about its z-axis by the yaw, then about its y-axis by the pitch and finally about its x-axis by the roll. This order should be maintained for the computation of the initial sensor orientation in the earth coordinate frame. First, three individual quaternions are computed for the heading (or z) rotation q_h, the pitch (or y) rotation q_p and the roll (or x) rotation q_r. Next, the quaternions are multiplied to build the final initial quaternion ^S_Eq_{i} of every sensor as

^S_Eq_{i} = q_h \times q_p \times q_r.

^S_Eq_{i} can then be used as the start orientation in the sensor orientation estimation.
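A compact sketch of this composition might look as follows; `axis_quat` and `quat_mult` are hypothetical helper names, and the quaternion layout [w, x, y, z] with Hamilton products is an assumption:

```python
import numpy as np

def axis_quat(angle, axis):
    """Unit quaternion [w, x, y, z] for a rotation by `angle` (rad)
    around one principal sensor axis ('x', 'y' or 'z')."""
    q = np.zeros(4)
    q[0] = np.cos(angle / 2.0)
    q["xyz".index(axis) + 1] = np.sin(angle / 2.0)
    return q

def quat_mult(a, b):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def initial_orientation(roll, pitch, yaw):
    """Compose q_i = q_h * q_p * q_r in the yaw-pitch-roll order
    described above."""
    q_h = axis_quat(yaw, 'z')
    q_p = axis_quat(pitch, 'y')
    q_r = axis_quat(roll, 'x')
    return quat_mult(quat_mult(q_h, q_p), q_r)
```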

The algorithm is very quick and well suited to estimating the orientation of a static or slow-moving rigid body with little or no external acceleration. However, to assure high accuracy of the computed orientation, no external forces should superimpose the accelerometer's measurement of gravity. This means that the sensor data should be collected under static conditions. Often, motion performances are preceded by a period of motionless posture (for example, a phase of concentration or the wait for an official start signal). Such a time interval is sufficient for the determination of the initial pose and can be used without imposing any extra requirements on the performing athlete.
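To exploit such an interval automatically, one could flag static samples in a recorded take, for instance as in the sketch below. The thresholds are illustrative only and need tuning per sensor:

```python
import numpy as np

def is_static(acc, gyr, g=9.81, acc_tol=0.5, gyr_tol=0.05):
    """Flag samples of a take as static: accelerometer magnitude close
    to gravity and angular rate close to zero.

    acc, gyr: (N, 3) arrays of accelerometer (m/s^2) and gyroscope
    (rad/s) samples. acc_tol and gyr_tol are illustrative thresholds.
    Returns a boolean array of length N.
    """
    acc_ok = np.abs(np.linalg.norm(acc, axis=1) - g) < acc_tol
    gyr_ok = np.linalg.norm(gyr, axis=1) < gyr_tol
    return acc_ok & gyr_ok
```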

Determining Body Segment Orientations

Ideally, the sensors could be placed directly on the bones of an athlete and the sensor data be expected to represent the mounted body segment without any further processing. In the real world, however, this is not possible. The varying anthropometry of athletes and the manual sensor placement create a displacement between the sensor and the real bone structure that has to be taken into account. A reasonable strategy here is to virtually align the sensor frame with the direction of the bone. For this, two calibration measurements are necessary, executed after sensor placement and before the main motion data collection starts.

[Figure: sensor displacement]

The first measurement is a static calibration with the athlete standing in the N-Pose. In this upright position, all relevant body segments (torso, legs, arms) are perpendicular to the ground surface. This means that the longitudinal axes of the bones are parallel to the direction of gravity and to the vertical axis of the defined global coordinate system. The displacement d_v between every sensor's longitudinal axis and the global vertical axis is then given by the difference between the normalized acceleration vector and the unit vector.
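A minimal sketch of this static step, assuming d_v is taken as the normalized mean accelerometer reading over the N-Pose capture (the function name is ours):

```python
import numpy as np

def vertical_axis_displacement(acc_static):
    """Estimate the sensor's vertical direction d_v from a static
    N-Pose capture: the mean accelerometer reading, normalized,
    points along gravity and therefore along the segment's
    longitudinal axis.

    acc_static: (N, 3) accelerometer samples recorded in the N-Pose.
    """
    d_v = acc_static.mean(axis=0)
    return d_v / np.linalg.norm(d_v)
```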

A rotational movement around one of the remaining segment axes constitutes the second calibration measurement. For every body part this axis is chosen individually, favoring the axis that enables a more accurate and precise rotation: the legs, for example, are swung around the lateral axis, and the torso is bent around the transverse axis. The deviation d_r between the sensor axis and the principal segment rotation axis can then be determined from the direction vector at the point of maximal angular velocity of the rotation.
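A possible reading of this step in code, assuming d_r is the normalized gyroscope reading at the sample with the highest angular rate:

```python
import numpy as np

def rotation_axis_displacement(gyr):
    """Estimate the segment's principal rotation axis d_r from the
    calibration movement: take the gyroscope sample with the highest
    angular rate and normalize it. Note that the sign of d_r depends
    on the direction of the calibration swing.

    gyr: (N, 3) gyroscope samples recorded during the calibration.
    """
    peak = np.argmax(np.linalg.norm(gyr, axis=1))
    d_r = gyr[peak]
    return d_r / np.linalg.norm(d_r)
```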

Using the newly determined direction vectors, the sensor frame is completed and stabilized with two cross product computations. The first one returns the missing sensor axis d_p as the product of d_r and d_v, d_p = d_r\times\!d_v. The second one refines d_r as the product of d_v and d_p, d_r' = d_v\times\!d_p. The axes then build a rotation matrix rM that depicts the sensor displacement. For a rotational calibration movement around the y-axis, it is for example defined as

rM = [d_p\, d_r'\, d_v].
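Translated into code, the two cross products and the assembly of rM could look like the following sketch (column order as in the equation above, for a y-axis calibration rotation):

```python
import numpy as np

def displacement_matrix(d_v, d_r):
    """Complete the calibrated sensor frame with two cross products
    and assemble the displacement rotation matrix rM.

    d_v, d_r: unit vectors from the static and rotational calibration.
    """
    d_p = np.cross(d_r, d_v)       # missing third axis d_p
    d_p /= np.linalg.norm(d_p)
    d_r2 = np.cross(d_v, d_p)      # refined rotation axis d_r'
    d_r2 /= np.linalg.norm(d_r2)
    return np.column_stack([d_p, d_r2, d_v])
```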

The displacement information for every sensor is then finally applied to the basic sensor orientation to provide the actual segment kinematics.
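How the displacement is applied depends on how the orientation estimate is represented; one plausible variant, assuming it is available as a 3×3 rotation matrix from the sensor to the earth frame, is:

```python
import numpy as np

def segment_orientation(R_sensor_to_earth, rM):
    """Combine the estimated sensor orientation with the
    sensor-to-segment displacement rM to obtain the segment
    orientation. The multiplication order shown here depends on the
    frame conventions and is an assumption, not a prescription.
    """
    return R_sensor_to_earth @ rM
```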

Framework and Resulting Kinematics

The two previous methods build a first framework for the determination of body kinematics from your input performance data (meaning the previously determined segment orientations). For applications that are based on the analysis of segment orientations, the resulting data are already sufficient. For visual analysis, you can now add the next computation step, which provides the body joint positions.

[Figure: simple sensor processing]