You want to measure your motion activity ubiquitously and without any restrictions? Then you need wearable (that is, inertial) sensors. You want to analyze human motion from a biomechanical point of view? Then it is beneficial to know about the properties commonly used to describe human motion: the orientation and position of body segments and joints. Ideally, these are of the same accuracy and reliability as optical motion capture data. However, inertial sensors and accurate, high-dimensional motion data do not readily go together. Therefore, it is first necessary to implement a framework that derives kinematic motion information from the raw acceleration, angular velocity and magnetic field measurements.

The most extensive and important step in acquiring kinematic motion data is to estimate the orientations of all sensors used. Although various methods have been introduced for this task over the last decades, it can be difficult to implement your own processing system. Many standard methods are abstract, highly mathematical and hence difficult to understand or implement at first. Moreover, it can be difficult to select the best method for a given task or motion. For this reason, I give a short summary of three popular orientation estimators. To support you in implementing your own processing pipeline, I have furthermore collected some tips and tricks on how to improve the performance of your implemented estimation methods.

Once you know that the orientations of all sensors can be reliably estimated, take a short break and congratulate yourself. You have passed the hardest part of the whole implementation! And even if you only know the sensor orientations so far, you have already obtained very powerful knowledge. The remaining kinematic properties can then be determined with a few simple computations, such as the estimation of the initial posture or of relative joint positions.

But before we start, let me quickly dwell on something else: what exactly is orientation, by the way? Even though it appears intuitive, it is worth familiarizing yourself with the orientation representation employed in your chosen motion analysis application. This is particularly important because representations of orientation and their naming conventions differ between research fields and applications. In computer science and engineering, for example, it is common to refer to spatial rotations as attitude, whereas they are generally referred to as posture in the motion sciences and humanities.

Representing Orientations

Mathematically, orientation can be expressed in different ways. Common representations are Euler angles, rotation vectors, rotation matrices and quaternions. Of these four, Euler angles are the most obvious and intuitive representation. They are defined by three variables that represent the rotations (angles) around the three principal axes. Following the general definitions and naming conventions, these are roll for the rotation around the sagittal axis (usually x), pitch for the rotation around the transversal axis (usually y), and yaw or azimuth for the rotation around the vertical axis (usually z).
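To make this concrete, here is a minimal sketch of how a set of roll, pitch and yaw angles can be converted into other representations. It uses SciPy's Rotation class; the angle values and the intrinsic x-y-z rotation sequence are assumptions for illustration and must be replaced by the convention of your own application.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical segment orientation given as Euler angles in degrees
roll, pitch, yaw = 10.0, 25.0, -40.0

# 'XYZ' denotes an intrinsic x-y-z rotation sequence; pick the sequence
# that matches the convention used in your application
rot = R.from_euler('XYZ', [roll, pitch, yaw], degrees=True)

print(rot.as_matrix())   # equivalent 3x3 rotation matrix
print(rot.as_quat())     # equivalent quaternion, SciPy ordering [x, y, z, w]
```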

[Figure: navigation]

Because of their intuitive interpretation, Euler angles are widely used in everyday life, the human sciences and engineering, for example in navigation. However, they lose one degree of freedom whenever the second rotation angle (the pitch) is equal to or very close to ±90 degrees. In that case, two rotation axes coincide and the orientation can no longer be described uniquely. This ambiguity is also known as gimbal lock.
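The following sketch illustrates this ambiguity numerically, again using SciPy's Rotation class (the angle values are arbitrary and chosen only for demonstration): with the pitch fixed at 90 degrees, two different combinations of yaw and roll describe exactly the same rotation.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Intrinsic z-y-x sequence (yaw, pitch, roll) with the pitch at 90 degrees
r1 = R.from_euler('ZYX', [0.0, 90.0, 30.0], degrees=True)   # yaw 0,  roll 30
r2 = R.from_euler('ZYX', [30.0, 90.0, 60.0], degrees=True)  # yaw 30, roll 60

# Both angle sets yield the identical orientation: only the difference
# roll - yaw still matters, so one degree of freedom is lost
print(np.allclose(r1.as_matrix(), r2.as_matrix()))  # True
```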

One strategy to avoid gimbal lock is to use a different attitude representation that does not suffer from this singularity. The most common alternative is the quaternion representation. Quaternions are not only free of gimbal lock, but also offer two further advantages: (a) fundamental arithmetic operations can be applied directly to the orientation data, and (b) the computational cost is low. As a consequence, quaternions are the standard for complex attitude computations, for example in computer graphics, computer vision and robotics. They also serve as the fundamental representation for all computations presented on this site. Hence, be prepared to carefully read about quaternions, their definitions and specifications [1] first before getting started…!
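As a small illustration of point (a), the sketch below composes two made-up sensor orientations into a relative joint orientation and rotates a vector, using SciPy's quaternion-backed Rotation class; the sensor names and angle values are purely hypothetical.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

# Hypothetical orientations of two body-segment sensors (values made up)
q_thigh = R.from_euler('XYZ', [5.0, 30.0, 0.0], degrees=True)
q_shank = R.from_euler('XYZ', [5.0, 60.0, 0.0], degrees=True)

# Relative orientation across the joint via quaternion multiplication
q_rel = q_thigh.inv() * q_shank
print(q_rel.as_euler('XYZ', degrees=True))   # approx. [0, 30, 0]

# Rotating a vector into the global frame is equally cheap
v_sensor = np.array([0.0, 0.0, 1.0])
print(q_thigh.apply(v_sensor))
```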