Optical motion capture is fascinating and instructive when your own or another person's performance is immediately visualized as a point cloud on the computer screen. However, it is not widely accessible to every interested athlete, coach, or spectator. Inertial sensors, in contrast, can be purchased easily, but their raw data output generally cannot be used to display kinematic motion parameters directly. So how can you combine the advantages of both and turn your collected motion data into an intuitive inertial sensor visualization?
With the processing pipeline you have implemented, this question can actually be answered relatively easily. Once all relevant joint positions are known, one additional computation step produces absolute body positions, which can then be plotted within a figure as a graphical sensor visualization.
Concretely, you can compute the full body posture of the sensor-equipped athlete as follows: first, retain all positional relations within the existing kinematic chains and choose a reference start point/joint for the figure visualization (for example, the upper part of the spine). Next, determine the connections between this reference and the origins of the kinematic chains. Rigid body parts that lie between neighboring joints belonging to two different kinematic chains are bridged by auxiliary vectors. These are computed in the same way as the segment end position vectors, except that the length of the body part between the two joints serves as input to the quaternion-vector rotation. As a last step, append the kinematic chains to the chosen reference point and to the auxiliary vector ends to obtain the absolute body positions (relative to the chosen starting point) and thus the complete body posture.
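The chain-appending step above can be sketched in a few lines of Python. This is a minimal illustration under assumed conventions (function and parameter names are hypothetical): each segment orientation is a unit quaternion in (w, x, y, z) order, and each segment vector is obtained by rotating a length-scaled reference axis with that quaternion. The same routine also produces the auxiliary vectors, by feeding in the length of the body part between two joints instead of a segment length.

```python
import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by the unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Efficient form of q * v * q_conjugate
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def chain_positions(start, orientations, lengths,
                    axis=np.array([0.0, 0.0, -1.0])):
    """Accumulate absolute joint positions along one kinematic chain.

    start:        absolute position of the chain origin (e.g. the
                  chosen reference joint, or the end of an auxiliary
                  vector for chains not attached to the reference)
    orientations: one unit quaternion (w, x, y, z) per segment
    lengths:      one segment length per segment (hypothetical units)
    axis:         assumed rest-pose direction of the segments

    Returns an (n_segments + 1, 3) array of absolute joint positions.
    """
    positions = [np.asarray(start, dtype=float)]
    for q, length in zip(orientations, lengths):
        seg = quat_rotate(q, length * axis)   # segment end position vector
        positions.append(positions[-1] + seg) # append to the chain so far
    return np.array(positions)
```

With identity quaternions, a two-segment chain of lengths 0.3 and 0.4 starting at the origin simply hangs straight down along the chosen axis; non-trivial quaternions bend the chain accordingly.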
Rendering a Sensor Visualization Movie
Plotting all absolute body positions, segment orientations, and auxiliary vectors of a sample creates a full stick-figure sensor visualization of the current frame. The concrete graphical properties are left to your own preferences and style. An intuitive data representation, similar to the one obtained with an optical motion capture system, is to display joints as dots and to draw segments and auxiliary vectors as straight lines between the joints. Plotted sequentially, the posture data can then be animated and rendered as a moving figure.
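One possible realization of this dots-and-lines animation is sketched below with matplotlib, assuming the absolute joint positions have already been computed per frame. The input shapes and the `bones` index list are assumptions for illustration, not part of the original pipeline.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; remove for interactive display
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def make_animation(postures, bones, interval_ms=20):
    """Animate per-frame joint positions as a moving stick figure.

    postures: (n_frames, n_joints, 3) absolute joint positions
    bones:    (joint_a, joint_b) index pairs to connect with lines,
              covering both segments and auxiliary connections
    """
    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    dots, = ax.plot([], [], [], "o")                  # joints as dots
    lines = [ax.plot([], [], [], "-")[0] for _ in bones]

    # Fix the axis limits so the figure does not rescale per frame
    lo, hi = float(postures.min()), float(postures.max())
    ax.set_xlim(lo, hi); ax.set_ylim(lo, hi); ax.set_zlim(lo, hi)

    def update(i):
        p = postures[i]
        dots.set_data(p[:, 0], p[:, 1])
        dots.set_3d_properties(p[:, 2])
        for line, (a, b) in zip(lines, bones):
            line.set_data(p[[a, b], 0], p[[a, b], 1])
            line.set_3d_properties(p[[a, b], 2])
        return [dots, *lines]

    return FuncAnimation(fig, update, frames=len(postures),
                         interval=interval_ms, blit=False)
```

The returned animation can be shown interactively with `plt.show()` or rendered to a movie file with `FuncAnimation.save`, which is how the sequential posture plots become a sensor visualization movie.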
The illustrated system and data processing pipeline can be freely adapted with respect to the number and placement of the sensors used. This enables application to a wide range of motor training scenarios: by adding or removing sensors, or by changing the sensor placement, it is also possible to visualize only particular body parts.