Even if you just want to dive into data mining to analyze your motion performance, there is a high chance you will first have to deal with signal processing. Of course, some information can be retrieved and used directly from the raw sensor data. Kinematic analysis, however, requires more meaningful, multidimensional data that have to be computed explicitly. One of the main processing steps to implement here is a method that provides an orientation estimate for every motion sensing device mounted on the athlete’s body.

Basic Estimation Principle

There are several ways to estimate sensor orientation from the raw measurement data. The simplest (and hence easiest to implement) and computationally cheapest (meaning fastest) one is to simply integrate the angular velocities. To get familiar with this concept, recall the fundamental physics that define velocity as the positional displacement of an object over time. Inverting this relation, integrating an object’s velocity yields its change in position, and thus the object’s position at a given time step. In the same way, it should also be possible to determine orientation, which is nothing other than angular position, by integrating the angular velocities.
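
As a quick numerical illustration of this idea, the following minimal sketch (all sample values are hypothetical) integrates a single-axis angular velocity signal to recover the rotation angle, just as integrating linear velocity recovers position:

```python
import numpy as np

dt = 0.01                                  # sample period in seconds (100 Hz)
t = np.arange(0.0, 2.0, dt)                # two seconds of data
omega = np.full_like(t, 45.0)              # hypothetical constant rate: 45 deg/s

angle = np.cumsum(omega * dt)              # numerical integration, step by step
print(angle[-1])                           # roughly 90 degrees after two seconds
```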

The mathematical definition of the integration process depends on the chosen data representation. Quaternion integration requires more computations than integrating the angular velocities along each individual Euler axis, but clearly makes up for this additional effort by producing a singularity-free output. Its algebraic description is based on two primary variables: the quaternion ^S_E\hat{q}, describing the orientation of the earth frame E relative to the sensor frame S, and the three-dimensional angular velocity vector ^S\omega.
As the first action of every time step t, ^S\omega is extended to four dimensions by prepending a zero as its scalar element. Next, the angular change is determined as the quaternion derivative ^S_E\dot{q}, computed from the angular velocities and the orientation of the previous time step by

^S_E\dot{q}_{\omega,t} = \frac{1}{2}^S_E\hat{q}_{est,t-1} \otimes ^S\omega_t.

Lastly, the quaternion derivative is numerically integrated to obtain the output quaternion estimate ^S_Eq_{\omega}:

^S_Eq_{\omega,t} = ^S_E\hat{q}_{est,t-1} + ^S_E\dot{q}_{\omega,t}\Delta t.
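
A minimal sketch of these two equations could look as follows. The quaternion product, the four-dimensional extension of ^S\omega, and a final normalization (which keeps the estimate a unit quaternion and is not part of the equations above) are written out explicitly; the sample rotation in the usage part is hypothetical:

```python
import numpy as np

def quat_multiply(p, q):
    """Hamilton product p ⊗ q for quaternions stored as [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def integrate_gyro(q_prev, omega, dt):
    """One integration step: q_dot = 0.5 * q_prev ⊗ (0, ωx, ωy, ωz),
    then q = q_prev + q_dot * dt, renormalized to unit length."""
    omega_quat = np.concatenate(([0.0], omega))      # extend to four dimensions
    q_dot = 0.5 * quat_multiply(q_prev, omega_quat)  # quaternion derivative
    q = q_prev + q_dot * dt                          # numerical integration
    return q / np.linalg.norm(q)

# Usage (hypothetical): rotate at 90 deg/s about z for one second at 100 Hz.
q = np.array([1.0, 0.0, 0.0, 0.0])                   # identity orientation
omega = np.array([0.0, 0.0, np.deg2rad(90.0)])       # rad/s in the sensor frame
for _ in range(100):
    q = integrate_gyro(q, omega, dt=0.01)
print(q)   # close to [cos(45°), 0, 0, sin(45°)]
```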

Unfortunately, integration is only perfectly accurate in theory. In practice, the measured angular velocity data always contains an unknown error caused by sensor bias, white noise of the sensors, and temperature differences. An integration estimate can be precise over a short period of time when this measurement error is removed as an offset before computation. However, the residual error accumulates, and the accuracy of the integrated estimate deteriorates over time. In other words, the orientation estimate veers away from the actual orientation. This accuracy problem is widely known and referred to as drift. Without further constraints that counteract drift, a simple integration estimator is therefore not suitable for long-term orientation determination.
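
The effect is easy to reproduce. In the following sketch, an assumed constant gyroscope bias of 0.5 deg/s plus white noise is integrated over ten minutes of standstill, and the estimated angle drifts by several hundred degrees although the sensor never moves:

```python
import numpy as np

dt = 0.01
bias = np.deg2rad(0.5)                            # assumed bias: 0.5 deg/s
t = np.arange(0.0, 600.0, dt)                     # ten minutes at standstill
omega_measured = bias + np.random.normal(0.0, 0.001, t.size)  # bias + white noise

angle = np.cumsum(omega_measured * dt)            # same integration as above
print(np.rad2deg(angle[-1]))                      # roughly 300 degrees of drift
```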

Fusion Filter

To avoid drift, it is common to use more complex filters that fuse the gyroscope data with the remaining measurement (observation) data from the accelerometer and magnetometer. The idea is that this sensor combination yields a faster, more efficient, and nearly drift-free orientation estimate that could not be obtained from any one of these sensors alone.

Angular velocities are not susceptible to external forces and are hence more accurate than the observation data during highly dynamic periods with strong external acceleration. During phases of low external acceleration, on the other hand, accelerometer and magnetometer data can provide a drift-free orientation estimate. For this reason, integration of the angular velocity usually forms the foundation and basic computation of the filter design. Long-term inaccuracies caused by drift are then corrected with the help of the observation vectors either before or after the integration step.

Sample Filter 1: Complementary Filter with Gradient Descent Optimization after Madgwick [2]

This filter merges the integrated orientation (quaternion ^S_Eq_{\omega,t}) with a second orientation estimate obtained from the sensor’s measurement vectors (quaternion ^S_Eq_{\Delta,t}). The correcting measurement orientation is determined in a gradient descent optimization step, whose minimization problem consists of finding the unique orientation that aligns the measured field directions in the sensor frame with reference field directions in the earth frame. These reference directions are predefined using constraints on the directions of gravity and the global magnetic field.

For every time step t, the two orientations ^S_Eq_{\omega,t} and ^S_Eq_{\Delta,t} are determined and then fused into the final orientation estimate using a predefined filter gain \gamma_t.

[Figure: complementary filter orientation estimate]

For detailed information on this filter as well as sample source code, visit the filter’s original page.
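
The following sketch illustrates the principle for the accelerometer (gravity) part only, in the simplified form where the normalized gradient of the minimization problem directly corrects the gyroscope quaternion derivative with a gain beta. The magnetometer term is omitted for brevity, and the gain value and sample input are assumptions for illustration, not the original implementation:

```python
import numpy as np

def gradient_descent_imu_step(q, gyro, acc, beta=0.1, dt=0.01):
    """One gradient-descent complementary filter update (gravity only).
    q is [w, x, y, z], gyro in rad/s, beta is an assumed filter gain."""
    q1, q2, q3, q4 = q
    acc = acc / np.linalg.norm(acc)                  # normalized measurement

    # Objective function: mismatch between the gravity direction predicted
    # from q and the measured acceleration direction.
    f = np.array([
        2.0 * (q2 * q4 - q1 * q3) - acc[0],
        2.0 * (q1 * q2 + q3 * q4) - acc[1],
        2.0 * (0.5 - q2 ** 2 - q3 ** 2) - acc[2],
    ])
    # Jacobian of the objective function with respect to q.
    J = np.array([
        [-2.0 * q3,  2.0 * q4, -2.0 * q1, 2.0 * q2],
        [ 2.0 * q2,  2.0 * q1,  2.0 * q4, 2.0 * q3],
        [ 0.0,      -4.0 * q2, -4.0 * q3, 0.0     ],
    ])
    gradient = J.T @ f                               # gradient descent direction
    norm = np.linalg.norm(gradient)
    if norm > 0.0:
        gradient = gradient / norm

    # Quaternion derivative from the gyroscope (same product as above) ...
    gx, gy, gz = gyro
    q_dot_omega = 0.5 * np.array([
        -q2 * gx - q3 * gy - q4 * gz,
         q1 * gx + q3 * gz - q4 * gy,
         q1 * gy - q2 * gz + q4 * gx,
         q1 * gz + q2 * gy - q3 * gx,
    ])
    # ... corrected by the weighted gradient (filter gain beta).
    q_dot = q_dot_omega - beta * gradient

    q = q + q_dot * dt
    return q / np.linalg.norm(q)

# Usage (hypothetical sample): sensor slightly tilted, no rotation measured.
q = np.array([1.0, 0.0, 0.0, 0.0])
q = gradient_descent_imu_step(q, gyro=np.zeros(3), acc=np.array([0.0, 1.0, 9.0]))
print(q)
```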

Sample Filter 2: Complementary Filter with Cosine Matrices after Mahony [3]

In this filter, it is not necessary to compute an additional measurement quaternion. Instead, drift in the orientation estimate is directly determined and removed using information from the acceleration and magnetic field vectors. This error correction is based on the idea that angular changes can be represented in a rotation matrix. The proportional, sensor-induced error feedback e_t is obtained as the cross product between the estimated field directions derived from the orientation estimate of the previous sample (\hat{v}_t, \hat{w}_t) and the sensor field measurements (^S\hat{a}_t, ^S\hat{m}_t). The error accumulating through integration over time is additionally accounted for by the integration error correction term e_{int,t}.

The angular velocity data is refined by weighting the two error terms with two filter gains. The final orientation is then obtained by integrating the corrected angular velocities.

[Figure: complementary filter orientation estimate]
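
This correction loop can be sketched as follows, again restricted to the gravity (accelerometer) term for brevity; the proportional and integral gains kp and ki as well as the sample input are assumptions for illustration rather than values from the original filter:

```python
import numpy as np

def explicit_complementary_step(q, gyro, acc, e_int, kp=1.0, ki=0.1, dt=0.01):
    """One explicit complementary filter update (gravity correction only).
    q is [w, x, y, z], gyro in rad/s, e_int carries the integral error."""
    q1, q2, q3, q4 = q
    acc = acc / np.linalg.norm(acc)

    # Gravity direction predicted from the previous orientation estimate (v̂_t).
    v_hat = np.array([
        2.0 * (q2 * q4 - q1 * q3),
        2.0 * (q1 * q2 + q3 * q4),
        q1 ** 2 - q2 ** 2 - q3 ** 2 + q4 ** 2,
    ])
    # Proportional error: cross product of measured and estimated directions (e_t).
    e = np.cross(acc, v_hat)
    # Integral error term accumulating the long-term correction (e_int,t).
    e_int = e_int + e * dt

    # Refine the angular velocities with the weighted error feedback.
    gyro_corrected = gyro + kp * e + ki * e_int

    # Integrate the corrected angular velocities (same step as before).
    gx, gy, gz = gyro_corrected
    q_dot = 0.5 * np.array([
        -q2 * gx - q3 * gy - q4 * gz,
         q1 * gx + q3 * gz - q4 * gy,
         q1 * gy - q2 * gz + q4 * gx,
         q1 * gz + q2 * gy - q3 * gx,
    ])
    q = q + q_dot * dt
    return q / np.linalg.norm(q), e_int

# Usage (hypothetical sample): small constant gyro bias, gravity along z.
q, e_int = np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3)
for _ in range(1000):
    q, e_int = explicit_complementary_step(q, gyro=np.array([0.01, 0.0, 0.0]),
                                           acc=np.array([0.0, 0.0, 9.81]),
                                           e_int=e_int)
print(q)   # remains near identity despite the biased gyroscope
```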

Sample Filter 3: Pseudo-linear Kalman Filter after Marins [4]

The Kalman filter (KF) is the most common method for estimating sensor orientation from the different inertial sensor data types and can be implemented in many ways. As a basic principle, a new value is predicted from a previous or initial guess on the basis of the current angular velocity data. This prediction is then compared with the observations from the correcting sensor measurement data, and its credibility is dynamically weighted using information about measurement noise and inaccuracies.

The present KF varies this general scheme: an orientation (quaternion ^S_Eq_{\Delta,t}) is computed from the measurement vectors before the main algorithmic cycle. Together with the angular velocity vector ^S\omega_t, this estimate then serves as input to the Kalman filter with filter models F_t and H_t and noise models w_t and v_t. Like the input, the output signal of the Kalman filter consists of three angular velocities and an orientation estimate. Dimensionality is thereby reduced and the filter can be treated as linear. As a result, the main computation cycle is simplified and accelerated.

[Figure: Kalman filter orientation estimate]
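
To make the prediction and correction scheme concrete, the following is a minimal sketch of a generic linear Kalman filter cycle rather than the full Marins design: F and H stand in for the filter models F_t and H_t, Q and R represent the noise models w_t and v_t as covariance matrices, and the one-dimensional usage example is purely hypothetical:

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One generic linear Kalman filter cycle: predict with the process
    model F, then correct with the measurement z through the model H.
    Q and R are the process and measurement noise covariances."""
    # Prediction from the previous (or initial) state estimate.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Correction: compare the prediction with the observed measurement and
    # weight the update by the Kalman gain K (credibility of the data).
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Usage with hypothetical one-dimensional models (a single angle tracked directly):
x, P = np.zeros(1), np.eye(1)
F, H, Q, R = np.eye(1), np.eye(1), np.eye(1), np.eye(1)
for z in [0.1, 0.12, 0.11, 0.13]:           # hypothetical measurements
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)
```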