Are you planning to use the processed sensor measurements and all of the previously estimated body kinematics for motion analysis and training? Then you should be sure about two things beforehand: (a) the general accuracy of the sensor estimates and (b) their robustness against external influences under the target application scenario.

Accuracy Validation

To validate the accuracy of your implemented methods, it is common to perform a small number of locally restricted motions (for example jumping, walking, dancing, boxing) in a laboratory setting. These motions are captured simultaneously by your inertial sensors and by a ground truth system that is known to be robust and precise. Optical motion capture usually lends itself to this task – with meticulous system calibration, it reaches a positional precision of less than 1 mm deviation. After data collection, you can then assess the actual performance of your system by comparing the resulting sensor and ground truth kinematics. But which sensor and camera data can be compared? And how exactly does this comparison work?

Sensor orientations constitute the foundation for the computation of all further body kinematics and are very sensitive to drift. One idea is therefore to evaluate your own sensor orientation estimates ^S_Eq against the sensor orientations ^C_Eq obtained from the optical system. For this, it is first necessary to capture the translations and rotations of every sensor with the camera system.
Markers arranged symmetrically around a sensor, together with their connecting vectors, represent the outer boundaries of the sensor's casing. As an example, let p_z be the vector built from M3-M1, p_y the vector built from M2-M1 and p_x the vector built from the cross product of p_y and p_z (in this order, so that the resulting frame is right-handed). After normalization, these three orthogonal vectors define the sensor's attitude in the camera frame via the rotation matrix [p_x \ p_y \ p_z], which can then be transformed into the ground truth quaternion ^C_Eq.

In many investigations, estimates from the sensor data are deemed sufficient when their angular errors range within 3-5 degrees of deviation from the angular information obtained with the ground truth data. Higher accuracy can often be enforced by additional sensor calibration or better noise models in the fusion filter, but this will not be discussed in further detail here. The reason is that sensor calibration and model fitting are relatively complicated and require specific equipment or mathematical understanding; hence they cannot be performed by everyone who might be interested in using inertial sensors. Let me therefore explain how the accuracy of the processing framework can be enhanced with alternative methods instead.
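To make both the marker arithmetic and the comparison concrete, here is a minimal sketch in Python of how the ground truth orientation and its angular deviation from a sensor estimate could be computed. The function names, the scipy-based implementation and the re-orthogonalization step are my own illustrative assumptions; only the vector construction follows the example above.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def ground_truth_orientation(m1, m2, m3):
    """Sensor attitude in the camera frame from three casing markers.

    m1, m2, m3: 3D marker positions (shape (3,)) following the example
    layout above, with M1 as the common origin of both edge vectors.
    Call .as_quat() on the result to obtain the quaternion ^C_Eq.
    """
    p_z = m3 - m1                    # edge vector M1 -> M3
    p_y = m2 - m1                    # edge vector M1 -> M2
    p_x = np.cross(p_y, p_z)         # perpendicular to both edges
    p_y = np.cross(p_z, p_x)         # re-orthogonalize; markers are never placed perfectly
    R = np.column_stack([v / np.linalg.norm(v) for v in (p_x, p_y, p_z)])
    return Rotation.from_matrix(R)

def angular_error_deg(r_sensor, r_truth):
    """Rotation angle (degrees) between sensor and ground truth estimates."""
    return np.degrees((r_sensor.inv() * r_truth).magnitude())
```

With this in place, a sensor estimate would count as sufficient in the sense above whenever angular_error_deg stays within roughly 3-5 degrees over the captured motion.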

Enhancement of Sensor Orientation Estimates

To improve your filter accuracy, it is first important to know which factors influence your sensor measurements and orientation estimates. Experimenting with your collected sensor data, you might already have realized that system accuracy can be both very high and very low. This difference usually depends on the settings of your implemented orientation estimation filter, which can considerably improve or deteriorate the filter output: accurate results are generally obtained when the filter values and noise models are well adapted to the requirements of a certain motion task, and inaccurate results when they are not. This makes the filter settings the largest single influence on data accuracy.
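To see how strongly a single setting can matter, consider a basic complementary filter for a tilt angle as a stand-in (this is not the filter discussed here, just the simplest fusion scheme that exhibits the same trade-off). Its only setting, the blending gain alpha, decides how much the smooth but drifting gyroscope integral is trusted over the noisy but drift-free accelerometer reference:

```python
import numpy as np

def complementary_tilt(gyro_rate, acc_angle, dt, alpha=0.98):
    """Fuse a gyroscope rate stream (rad/s) with accelerometer tilt angles (rad).

    alpha close to 1 favors the gyro integral (good for fast, dynamic motion),
    alpha close to 0 favors the accelerometer (good for slow, quasi-static
    motion). The same data can thus yield very different accuracy depending
    on this one value.
    """
    angle = acc_angle[0]
    estimates = [angle]
    for rate, acc in zip(gyro_rate[1:], acc_angle[1:]):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc
        estimates.append(angle)
    return np.array(estimates)
```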

In cases of poor accuracy, it becomes necessary to adapt the filter values experimentally until a higher accuracy is achieved. However, this adaptation is often complicated, time-consuming and tedious, and the best setting is nevertheless often hard to find. Filter values might furthermore vary from case to case (respectively from sport to sport). Regulating sensor noise and drift via the filter settings can therefore be a very difficult task. Especially for users who are not familiar with the underlying mathematical concepts, the adaptation requires rigorous effort and willpower. So is there no way to achieve a similar gain in accuracy under easier premises?

A different idea is to determine all factors that influence the data quality and use their manifestation within a collected motion data stream to predict its general noise potential. In a next step, this knowledge is used to process the sensor data in a more specific way: the filter settings of the orientation estimator are chosen flexibly, depending on the determined noise potential. In my investigations, the following factors had an influence on the drift potential of a data stream:

  • The execution speed of a captured motor activity (slow or fast)
  • The structure of the motion (high-frequency, cyclic motion or low-frequency, acyclic motion)
  • The motion dimensionality (action taking place along mainly one motion axis or along multiple motion axes)
  • The direction of motion execution (whether or not it runs parallel to the direction of gravity)
  • The general background noise of the present sensor

Ideally, orientation should be estimated on an individual basis under consideration of all of the previous factors. The orientation estimation process is then no longer generic and universal, but subject to the characteristics of a sport motion/activity, the sensor placement and the individual hardware specifications. Its use, however, should not become any more complicated. And this brings us to the problem of how to achieve all of that together.

Application

My idea for handling the previous problem is to store a certain number of possible filter values per sensor for every estimation filter. These filter values are defined once during system implementation. They correspond to the identified influences on accuracy and carry attribute tags like fast, medium and slow motion, or one-dimensional and multi-dimensional. The filter values of the processing system can then be changed dynamically, in consideration of the current motion characteristics, before the main computation.
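Such a predefined store might look as follows in a minimal sketch; the tag names mirror the attributes listed earlier, while the key structure and the concrete gain values are purely illustrative placeholders:

```python
# One fusion filter gain per combination of attribute tags, defined once
# per sensor during system implementation. All numbers are placeholders.
FILTER_VALUES = {
    # (speed, dimensionality): filter gain
    ("slow",   "one-dimensional"):   0.02,
    ("slow",   "multi-dimensional"): 0.05,
    ("medium", "one-dimensional"):   0.04,
    ("medium", "multi-dimensional"): 0.08,
    ("fast",   "one-dimensional"):   0.10,
    ("fast",   "multi-dimensional"): 0.15,
}
```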

The previous strategy can be implemented in a very simple setup. A target motion is annotated with respect to its angular velocities and motion dimensions by the system user in a simple, elementary user interface. This interface contains a few specific input instructions that describe a motion and that are correlated to the predefined attribute tags. The best-suited filter values are then automatically selected from the list of all filter values in accordance with the annotated attribute tags. As a result, the estimation filter is adapted to the captured motion and sensor properties, and the overall system estimate improves without any additional effort on the user's side.
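Building on the table sketched above, the annotation-driven selection then reduces to a simple lookup; the interface fields and error handling are again assumptions of this sketch:

```python
def select_filter_value(speed: str, dimensionality: str) -> float:
    """Map the user's motion annotation to the best stored filter value."""
    key = (speed, dimensionality)
    if key not in FILTER_VALUES:
        raise ValueError(f"no filter value stored for annotation {key}")
    return FILTER_VALUES[key]

# Example: the user annotates a boxing drill as fast and multi-dimensional.
gain = select_filter_value("fast", "multi-dimensional")   # -> 0.15
```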


Stabilizing Magnetic Measurements

Most fusion filters utilize the sensor's magnetic field measurements as a reference for their heading angle. These field measurements are, however, sensitive to disturbances caused by ferromagnetic materials (like steel) and electrical environments (like cables, wires or similar electrical equipment). As a consequence, the resulting filter estimates can be erroneous and unreliable. Ideally, you should therefore also include a method in your processing pipeline that makes the sensor measurements invariant to magnetic disturbances.

You have no idea how to do that? Then just keep on reading, because I developed a simple method for the compensation of magnetic background distortion that I want to share with you here. Concretely, this method makes use of exactly the fact that causes the errors in the first place – namely the fact that a sensor's heading angle is defined in reference to the direction of the local magnetic field…
The method requires only one additional measurement of all sensors at the start point of your motion performance and can easily be performed before the beginning of your primary data collection. Make sure that the sensors are at rest and closely aligned with the main direction of motion. The measured field information of every sensor then yields a set S_{fV} of individual reference field vectors \vec{fV} that enable us to correct any bias in a data capture. First, we can determine the heading difference \psi_{\Delta,n} between \vec{fV}_n and the current measurement vector \vec{mV}_n for every sensor n. Next, we can use this difference as a bias offset and correct the current data capture accordingly, by rotating the vector observation about the vertical axis by \psi_{\Delta,n}:

\vec{mV}_n' = R_z(\psi_{\Delta,n})\,\vec{mV}_n,

where R_z(\psi) denotes the rotation by the angle \psi about the vertical axis.
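A sketch of this correction per sensor n could look as follows; the assumption that the sensor's z-axis is vertical during the reference measurement (so that the heading lives in the x-y plane) is mine, made only to keep the sketch short:

```python
import numpy as np

def heading_difference(ref_field, meas_field):
    """psi_delta: heading angle between reference vector fV_n and the
    current measurement vector mV_n, taken in the horizontal plane."""
    psi_ref  = np.arctan2(ref_field[1],  ref_field[0])
    psi_meas = np.arctan2(meas_field[1], meas_field[0])
    return (psi_ref - psi_meas + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]

def correct_observation(meas_field, psi_delta):
    """Rotate the measured field vector about the vertical axis by psi_delta."""
    c, s = np.cos(psi_delta), np.sin(psi_delta)
    R_z = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    return R_z @ meas_field

# psi_delta is computed once per sensor from the reference measurement at
# rest and then applied to every magnetometer sample of the data capture.
```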

Results showed that the method is very effective at removing magnetic background bias: heading angle estimates became very reliable and hardly deviated from each other. The method is furthermore fast and easy to implement, since it does not require sophisticated hardware knowledge. It is simply added to the general processing pipeline before the estimation of the initial orientation, and it considerably improves the accuracy of all resulting and related body kinematics.
