Article

Measurement of Shoulder Abduction Angle with Posture Estimation Artificial Intelligence Model

Department of Orthopaedic Surgery, Kobe University Graduate School of Medicine, Kobe 650-0017, Japan
*
Author to whom correspondence should be addressed.
Sensors 2023, 23(14), 6445; https://doi.org/10.3390/s23146445
Submission received: 25 May 2023 / Revised: 10 July 2023 / Accepted: 14 July 2023 / Published: 16 July 2023
(This article belongs to the Section Sensing and Imaging)

Abstract

Markerless motion capture has advanced substantially, but discrepancies persist between joint angles measured with it and those taken with a goniometer. This study integrates machine learning techniques with markerless motion capture with the aim of enhancing this accuracy. Two artificial intelligence-based libraries, MediaPipe and LightGBM, were employed for markerless motion capture and shoulder abduction angle estimation. The motion of ten healthy volunteers was captured using smartphone cameras with right shoulder abduction angles ranging from 10° to 160°. The cameras were set diagonally at 45°, 30°, 15°, 0°, −15°, or −30° relative to the participant, who stood at a distance of 3 m. To estimate the abduction angle, machine learning models were developed using the goniometer angle data as the ground truth. Model performance was evaluated using the coefficient of determination R2 and the mean absolute percentage error, which were 0.988 and 1.539%, respectively, for the trained model. This approach could estimate the shoulder abduction angle even when the camera was positioned diagonally with respect to the participant. Thus, the proposed models can be utilized for the real-time estimation of shoulder motion during rehabilitation or sports motion.

1. Introduction

The assessment of the range of motion (ROM) of the shoulder joint is crucial in the medical field for diagnosis, evaluation of disability severity, and appraisal of the treatment outcomes of surgical interventions [1,2,3]. Accurate measurement of shoulder ROM enables healthcare professionals to determine the extent of joint dysfunction and to monitor the progress of treatment and rehabilitation. The universal goniometer (UG) is the most widely applied method for measuring shoulder-joint ROM in clinical settings owing to its low cost, portability, and ease of use [4,5]. However, UG measurements cannot evaluate joint angles during movement. Alternative methods, such as three-dimensional gyroscopes [6,7], marker-based motion capture systems [8,9,10], and inertial and magnetic sensors [11,12,13,14], are limited by high costs, poor accessibility, the need for skilled operators, and environmental constraints. Challenges in acquiring valid and repeatable data in human subjects can also arise because the skin on which markers are placed moves relative to the underlying skeleton, as highlighted in previous studies [15]. Recent advancements in computer vision and markerless techniques have promoted the development of posture-estimation algorithms that can track human motion with high accuracy and minimal technical requirements [16,17,18,19,20,21]. As such, these algorithms can potentially revolutionize joint-angle assessment in clinical settings by overcoming the limitations of existing methods. Although still limited, there is an increasing body of literature supporting the validity of markerless motion capture systems compared with traditional marker-based systems. For example, a study by Drazan et al. investigated lower limb angles during vertical jumps [22], while Tanaka et al. focused on lower limb joint angles during the Functional Reach Test [23]. Additionally, Wochatz et al. examined lower limb joint angles during movements including squats [24]. Nonetheless, previous studies demonstrated that camera-based posture-estimation methods entail uncertainties related to camera angles [25,26] as well as the size and ROM of body parts [27], which affect the accuracy of joint-angle measurements. In view of these uncertainties, one proposed solution is to employ multiple cameras in markerless motion capture systems [28].
Within the field of clinical measurements in rehabilitation, numerous studies utilizing markerless motion capture have been conducted, many of which employ RGB-D cameras such as Microsoft Kinect [29]. RGB-D stands for “Red, Green, Blue—Depth”, referring to a type of camera that produces 3D images by combining color and distance information [30]. Similarly, reports exist regarding the use of RGB-D cameras in markerless motion capture systems for shoulder angle measurement. For instance, Gritsenko et al. [31] utilized Kinect to measure shoulder abduction in post-breast cancer surgery patients, while Beshara et al. [32] examined the reliability and validity of shoulder joint range of motion measurements by integrating wearable inertial sensors and Microsoft Kinect, with both studies reporting favorable accuracy. Moreover, with the evolution of image processing technology, depth estimation has become feasible using only RGB color imagery, enabling both tracking and machine learning tasks. This has been facilitated by a range of algorithms that allow for the recognition of human forms and the calculation of joint positions within a three-dimensional (3D) space [33]. As such, approaches employing RGB color imagery provide a more economical and practical alternative compared to methods dependent on RGB-D devices.
MediaPipe, developed by Google, is an algorithm in the domain of RGB capture. It is a universal open-source platform capable of operating on a multitude of systems. In principle, it utilizes a lightweight convolutional neural network architecture, specifically tuned for real-time inference on mobile devices, for estimating 3D human posture [34]. MediaPipe BlazePose (hereafter referred to as “MediaPipe”) can evaluate the (x, y, z) coordinates of 33 skeletal key points for an individual from RGB images, thereby providing an attractive option for joint-angle assessment. Although MediaPipe has demonstrated superior accuracy in comparison to other posture-estimation methods, it exhibits certain limitations [35]. Existing reports suggest that MediaPipe can measure limb movements with an accuracy comparable to Kinect V2 [36]; however, studies based on MediaPipe are still relatively scarce. In our preliminary experiments, we observed that shoulder abduction angles computed from the coordinates detected by MediaPipe were prone to errors, and these errors increased with variations in the camera position and with increasing abduction angles. These findings highlight the need for further refinement of the algorithm to improve its accuracy and applicability in clinical settings. Hence, this study aimed to investigate the possibility of enhancing the detection accuracy of the shoulder-joint abduction angle by combining machine learning (ML) with the coordinate data obtained by MediaPipe from smartphone camera images. By addressing the limitations of existing methods, the proposed approach aims to provide a more accurate and accessible method for assessing shoulder joint angles during motion. This advancement is expected to improve the accuracy of diagnoses and the evaluation of treatment outcomes in patients with shoulder joint disorders, ultimately enhancing patient care and supporting clinical decision-making.

2. Materials and Methods

2.1. Participants

For the assessment of right-shoulder joint angles, this study included ten healthy adult volunteers (five males and five females; mean age: 35 ± 5 years; mean height: 166.3 ± 8.1 cm; BMI: 22.1 ± 1.7 kg/m²). All participants were right-handed. The participants were instructed to perform abduction movements of the right shoulder joint in a standing position, facing forward. The researcher provided verbal instructions regarding the initial and terminal actions to be performed, and an experienced physical therapist communicated the desired actions to the volunteers. The study was approved by the Kobe University Review Board (approval number: 34261514), and informed consent was obtained from all participants.

2.2. Goniometric Measurements

The goniometric measurements were performed by two raters: evaluator A was an orthopedic surgeon with 8 years of clinical experience, and evaluator B was a physical therapist with 10 years of clinical experience. The participants were instructed to assume a standing position, and the measurements were performed using a 200 mm Todai goniometer (medical device notification number: 13B3X00033000015, Japan) according to the method described by Clarkson [37] for measuring the supine position (Figure 1). The participants, equipped with a strong magnetic wristband on their right hand (Figure 2), were positioned in front of a steel wall with their back in tight contact with the wall. Based on the UG measurements, the magnet was set at an angle to restrict the motion of the upper arm. The horizontal flexion and extension of the shoulder joint were kept at 0°, and all measurements were repeated twice at abduction angles of 10–160° in increments of 10°.

2.3. Data Acquisition and Image Processing by MediaPipe

After setting the shoulder-joint abduction angle, a smartphone (iPhone SE3, Apple Inc., Cupertino, CA, USA) was positioned 3 m from the participant at a height of 150 cm above the floor. The camera was set at diagonal positions of 45°, 30°, 15°, 0°, −15°, and −30° relative to the participant standing at a distance of 3 m. The camera positioned 15° to the right of the participant was denoted as 15°, and that positioned 15° to the left was denoted as −15° (Figure 3); i.e., placements to the right and left were counted as positive and negative diagonal positions, respectively. All video recordings were captured in 1080p HD at 30 fps by a designated examiner (K.M.), with each angle recorded for approximately 2 s. The video files were processed using the MediaPipe Pose Python library to obtain the joint coordinates (x, y, z). The x- and y-coordinates represent the horizontal and vertical positions relative to the detected hip joint center, whereas the z-coordinate represents the estimated distance of the object from the camera; i.e., lower z-values indicate that the object is closer to the camera. Among the 33 joint coordinates detected by MediaPipe [34] (Figure 4), the coordinates of the shoulder joints, hip joints, elbow joints, and nose were used for measurement. An example of an image analyzed using MediaPipe is illustrated in Figure 5; the distance, angle, and area parameters were calculated from the coordinate data using vector calculations.
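To make the extraction step concrete, the following minimal sketch (not the authors' original script) reads a video with OpenCV and collects the 33 landmark coordinates per frame via the MediaPipe Pose Python API; the file name is hypothetical, and the landmark indices follow MediaPipe's documented numbering.

```python
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def extract_landmarks(video_path):
    """Yield the 33 (x, y, z) landmark coordinates for each frame of a video."""
    cap = cv2.VideoCapture(video_path)
    with mp_pose.Pose(static_image_mode=False, model_complexity=1) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV reads BGR frames
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.pose_landmarks is None:
                continue
            yield [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]
    cap.release()

# Hypothetical file name; indices 12, 14, 24 are right shoulder, right elbow, right hip
for coords in extract_landmarks("abduction_45deg.mp4"):
    right_shoulder, right_elbow, right_hip = coords[12], coords[14], coords[24]
```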
By following these steps, the angles, distances, and areas were evaluated using vector representations. First, a vector was created by subtracting the coordinates of the starting joint from those of the ending joint. For instance, the coordinates of the right shoulder joint were subtracted from those of the right elbow joint to construct a vector directed from the right shoulder toward the right elbow. The length of the vector is denoted by |a| and is calculated using the Euclidean distance formula:
|a| = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2 + (z_2 - z_1)^2}.
To calculate the ratio of the vector lengths, the length of vector a was divided by that of vector b. In principle, this ratio provides information on the relative positioning of the joints:
\mathrm{Ratio}(a, b) = |a| / |b|.
Subsequently, the angle between vectors a and b was calculated using the dot-product formula and the vector lengths. The arccosine function was used to compute the angle for a given cosine value. For calculating the 2D angles, only the x- and y-coordinates were used, excluding the z-coordinate:
\mathrm{Angle}(a, b) = \arccos\left( \frac{a \cdot b}{|a| \, |b|} \right).
Here, the dot product a · b was evaluated as follows:
a \cdot b = a_x b_x + a_y b_y + a_z b_z.
Furthermore, the area between detected coordinates was defined using the cross-product function, employing the outer product of the vectors as follows:
\mathrm{Area}(a, b) = 0.5 \, |a \times b|,
where the cross product a × b was calculated as follows:
a \times b = a_x b_y - a_y b_x.
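These vector operations map directly onto a few lines of NumPy. The following is a minimal sketch under the assumption that landmark coordinates are available as (x, y, z) tuples; the example coordinates are purely illustrative.

```python
import numpy as np

def vector(start, end):
    """Vector from the starting joint to the ending joint, e.g. right shoulder -> right elbow."""
    return np.asarray(end, dtype=float) - np.asarray(start, dtype=float)

def length(a):
    """Euclidean length |a| of a 2D or 3D vector."""
    return float(np.linalg.norm(a))

def ratio(a, b):
    """Ratio of vector lengths |a| / |b|."""
    return length(a) / length(b)

def angle(a, b):
    """Angle between vectors a and b in degrees (pass only x, y for the 2D variant)."""
    cos_theta = np.dot(a, b) / (length(a) * length(b))
    return float(np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0))))

def area(a, b):
    """Half the magnitude of the 2D cross product a_x*b_y - a_y*b_x."""
    return 0.5 * abs(a[0] * b[1] - a[1] * b[0])

# Illustrative (x, y, z) coordinates as a pose estimator might return them
r_shoulder, r_elbow, r_hip = (0.62, 0.41, -0.30), (0.78, 0.38, -0.28), (0.60, 0.72, -0.25)
a = vector(r_shoulder, r_elbow)
b = vector(r_shoulder, r_hip)
print(ratio(a, b), angle(a[:2], b[:2]), area(a, b))
```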

2.4. Machine Learning (ML)

We compared the performances of the two ML algorithms—linear regression and LightGBM [38], which is a gradient boosting framework based on decision-tree learning algorithms—to estimate the shoulder abduction angle using the parameters evaluated from the estimated joint coordinates. Linear regression is a classical regression model, whereas LightGBM offers improved computational efficiency, reduced memory occupancy, and enhanced classification accuracy, while preventing overfitting. It has been used earlier to estimate hand postures from RGB images [39]. The machine-learning library Scikit-learn in Python was used for model training, and the workflow of the current experiment is illustrated in Figure 6. First, we measured the accuracy of estimating the shoulder abduction angle from the parameters derived from the images at fixed camera positions (① estimation of the shoulder abduction at the fixed camera position). Thereafter, we created a model for estimating the camera position (② estimating the camera installation position model). Following that, we incorporated the “estimate_camAngle” parameter derived from this model into the development of another model (③ estimating the shoulder abduction model at any camera installation position), which allows the detection of the shoulder abduction angle, regardless of the camera position.
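The two-stage chaining of steps ② and ③ could be wired together as in the following sketch; this is illustrative only, assuming X_train is a pandas DataFrame of the Table 1 parameters with simplified column names, that y_cam_angle and y_abduction hold the camera angles and UG angles, and omitting any tuned hyperparameters.

```python
import lightgbm as lgb

# Column names are hypothetical, mirroring the Table 1 parameter groups
cam_features = ["hip_distratio", "uppertrunkAngle", "lowertrunkAngle", "faceAngle", "trunksize"]
abd_features = ["arm_distratio", "elbowhip_distratio", "shoulderAbduction",
                "shoulder_3Dabduction", "shoulderAngle", "shoulder_3Dangle"]

# Stage ②: estimate the camera installation position from trunk/face parameters
cam_model = lgb.LGBMRegressor()
cam_model.fit(X_train[cam_features], y_cam_angle)

# Stage ③: feed the estimated camera angle in as an extra feature for the abduction model
X_train = X_train.assign(estimate_camAngle=cam_model.predict(X_train[cam_features]))
abd_model = lgb.LGBMRegressor()
abd_model.fit(X_train[abd_features + ["estimate_camAngle"]], y_abduction)
```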
In total, 66,032 images were recorded at the six camera angles for the 10 participants across the 16 distinct shoulder abduction angles ranging from 10° to 160°. The acquired images were randomly split into training samples (80%), used for hyperparameter tuning and generating the ML models, and validation samples (20%), used to verify the performance of each model. After determining the optimal hyperparameters for each ML algorithm using the training samples, the coefficient of determination (R2), mean absolute percentage error (MAPE), and mean absolute error (MAE) were selected as performance metrics for comparing the accuracy of the models. Figure 6 uses two terms: permutation feature importance and the Shapley Additive exPlanations (SHAP) value. Briefly, permutation feature importance is a technique for quantifying the contribution of each input feature to the model's predictive performance by randomly shuffling that feature and observing the effect on model accuracy. SHAP values, on the other hand, measure the contribution of each feature to the prediction for each sample, based on game theory. Detailed explanations of these terms follow in the "Statistical Analysis" section.
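As an illustration of this split-and-evaluate step, a brief sketch using scikit-learn and the LightGBM scikit-learn API is shown below; X and y stand for the prepared per-frame parameter matrix and the UG ground-truth angles, and the study's tuned hyperparameters are not reproduced.

```python
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error, mean_absolute_percentage_error
import lightgbm as lgb

# X: per-frame parameter table, y: UG ground-truth abduction angles (assumed prepared)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

model = lgb.LGBMRegressor()          # hyperparameters would be tuned on the training split
model.fit(X_train, y_train)
pred = model.predict(X_val)

print("R2  :", r2_score(y_val, pred))
print("MAPE:", 100 * mean_absolute_percentage_error(y_val, pred), "%")  # sklearn returns a fraction
print("MAE :", mean_absolute_error(y_val, pred))
```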

2.5. Parameters

The parameters used in the analysis are listed below with a brief description of each (Figure 7), focusing on the parameters related to the right shoulder; a short computational sketch follows the list. The parameters used for each ML model are presented in Table 1. The faceAngle and trunk parameters (trunkAngle, trunksize) were regarded as more indicative of the body's orientation than of the shoulder joint angle; consequently, they were utilized in the "Estimation of camera installation position" model.
  • rtarm_distratio: The ratio of the length between the right shoulder and right elbow to that between the right shoulder and right hip joint (Figure 7: ①/②), representing the relative positional relationship of the elbow with respect to the shoulder and hip joints.
  • rtelbowhip_distratio: The ratio of the length between the right elbow and the right hip joint to that between the right shoulder to the right hip joint (Figure 7: ③/②), reflecting the relative positional relationship of the elbow and hip joints with respect to the shoulder.
  • rthip_distratio: The ratio of the length between the right shoulder and the right hip joint to that between the hip joints (Figure 7: ④/②), representing the relative positional relationship of the waist with respect to the shoulder.
  • rtshoulder_distratio: The ratio of the length between the shoulder joints to that between the right shoulder and right hip joint (Figure 7: ⑤/②), clarifying the relative positional relationship of the shoulder with respect to the hip joint.
  • rtshoulderAbduction: The angle ⑥ in Figure 7, calculated from 2D coordinates, representing the abduction angle of the right shoulder in the 2D space.
  • rtshoulder_3Dabduction: The angle ⑥ in Figure 7, calculated from 3D coordinates, representing the abduction angle of the right shoulder in the 3D space.
  • rtshoulderAngle: The angle ⑦ in Figure 7, calculated from 2D coordinates, representing the angle between the right shoulder, right elbow, and right waist in the 2D space.
  • rtshoulder_3Dangle: The angle ⑦ in Figure 7, calculated from 3D coordinates, representing the angle between the right shoulder, right elbow, and right waist in the 3D space.
  • rt_uppertrunkAngle: The angle ⑧ in Figure 7, calculated from 2D coordinates, representing the angle between the right shoulder, upper trunk, and left shoulder in the 2D space.
  • lt_uppertrunkAngle: The angle ⑨ in Figure 7, calculated from 2D coordinates, representing the angle between the left shoulder, upper trunk, and right shoulder in the 2D space.
  • rt_lowertrunkAngle: The angle ⑩ in Figure 7, calculated from 2D coordinates, representing the angle between the right waist, lower trunk, and left waist in the 2D space.
  • lt_lowertrunkAngle: The angle ⑪ in Figure 7, calculated from 2D coordinates, representing the angle between the left waist, lower trunk, and right waist in the 2D space.
  • rt_faceAngle: The angle ⑫ in Figure 7, calculated from 2D coordinates, representing the angle between the right side, center, and left side of the face in the 2D space.
  • lt_faceAngle: The angle ⑬ in Figure 7, calculated from 2D coordinates, representing the angle between the left side, center, and right side of the face in the 2D space.
  • rt_trunksize: As portrayed in Figure 7, the magnitude of the cross-product of the vector from the right shoulder to the left shoulder (a) and the right-trunk vector (b), divided by the square of the right trunk length, representing the relative size of the right trunk area in the 2D space:
\mathrm{rt\_trunksize} = |a \times b| / |b|^2.
  • lt_trunksize: As depicted in Figure 7, the magnitude of the cross-product of the vector from the right shoulder to the left shoulder (a) and the left-trunk vector (c), divided by the square of the left trunk length, representing the relative size of the left trunk area in the 2D space:
\mathrm{lt\_trunksize} = |a \times c| / |c|^2.
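To make these definitions concrete, the following sketch computes a few of the listed parameters from a single frame's 33 MediaPipe landmarks. The landmark indices follow MediaPipe's documented numbering, and the assumption that angle ⑥ lies between the shoulder-to-elbow and shoulder-to-hip vectors is ours, made only for illustration; this is not the authors' full feature set.

```python
import numpy as np

# MediaPipe Pose landmark indices (per the library's documented numbering)
NOSE, L_SHOULDER, R_SHOULDER, R_ELBOW, L_HIP, R_HIP = 0, 11, 12, 14, 23, 24

def norm(v):
    return float(np.linalg.norm(v))

def angle_deg(a, b):
    """Angle between two vectors in degrees."""
    cos_t = np.dot(a, b) / (norm(a) * norm(b))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

def params_from_landmarks(pts):
    """Compute a few Table 1-style parameters from 33 (x, y, z) landmarks (simplified sketch)."""
    pts = np.asarray(pts, dtype=float)
    shoulder_to_elbow = pts[R_ELBOW] - pts[R_SHOULDER]      # segment ① direction
    shoulder_to_hip = pts[R_HIP] - pts[R_SHOULDER]          # segment ② direction
    shoulder_to_shoulder = pts[L_SHOULDER] - pts[R_SHOULDER]

    rtarm_distratio = norm(shoulder_to_elbow) / norm(shoulder_to_hip)        # ①/②
    # Assumed 2D angle ⑥: upper-arm vector vs. shoulder-hip vector (x, y only)
    rtshoulderAbduction = angle_deg(shoulder_to_elbow[:2], shoulder_to_hip[:2])
    # 2D cross product |a x b| / |b|^2 for the relative right-trunk size
    a, b = shoulder_to_shoulder[:2], shoulder_to_hip[:2]
    rt_trunksize = abs(a[0] * b[1] - a[1] * b[0]) / norm(b) ** 2
    return {"rtarm_distratio": rtarm_distratio,
            "rtshoulderAbduction": rtshoulderAbduction,
            "rt_trunksize": rt_trunksize}
```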

2.6. Statistical Analysis

Statistical analyses were performed using R Studio (RStudio PBC, Boston, MA, USA). The data are presented as mean values and standard deviations, and statistical significance was defined as p < 0.001. In addition, the importance of each predictive parameter was calculated using two distinct algorithms. Permutation feature importance was defined as the amount by which the model score decreased when the values of a single feature were randomly shuffled. Specifically, to evaluate the importance of a given feature, we generated a dataset with the values of that feature shuffled and compared the resulting model score with that obtained on the original dataset [40]. For example, the permutation feature importance of rtshoulder_distratio was calculated by shuffling its values and recording the resulting change in the model score. The SHAP values were defined as the contribution of each feature to the model predictions based on game theory, and we used them to assess the contribution of each feature to the prediction [41]. For instance, to evaluate the impact of rt_uppertrunkAngle on the prediction, the model was trained with the remaining features, excluding rt_uppertrunkAngle, and the deviation in the model scores was evaluated. The SHAP values are insightful for understanding the significance of individual features. All ML model analyses were performed using the Scikit-learn v1.0.2 library in a Python v3.8 environment.
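A sketch of how both analyses could be run on the trained model is shown below; it assumes abd_model, X_val, and y_val from the earlier sketches (X_val as a pandas DataFrame) and the third-party shap package.

```python
from sklearn.inspection import permutation_importance
import shap

# Permutation feature importance: drop in validation score after shuffling each feature
perm = permutation_importance(abd_model, X_val, y_val, n_repeats=10, random_state=0)
for name, score in sorted(zip(X_val.columns, perm.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.4f}")

# SHAP values: per-sample feature contributions for the tree-based LightGBM model
explainer = shap.TreeExplainer(abd_model)
shap_values = explainer.shap_values(X_val)
shap.summary_plot(shap_values, X_val)
```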

3. Results

3.1. Estimation of Shoulder Abduction at the Fixed Camera Angle

The model was trained using the parameters listed in Table 1. In particular, shoulder_3Dabduction, shoulderAbduction, and rtelbowhip_distratio exhibited strong positive correlations with the shoulder-joint abduction angle measured using the UG at each camera angle, whereas rtarm_distratio, rtshoulder_distratio, and rthip_distratio were negatively correlated. The accuracies of the ML models for each camera angle are summarized in Table 2. Compared with linear regression, LightGBM was more accurate at all camera angles. Therefore, only the LightGBM models were considered in further experiments.

3.1.1. Estimating the Camera Installation Position Model

The model was trained using LightGBM with the parameters listed in Table 1. As the MAPE of this model could not be evaluated (the target camera angle takes the value 0°, for which a percentage error is undefined), its MAE was used for the performance comparison instead. The camera installation position estimation model exhibited adequate accuracy, with a coefficient of determination R2 = 0.996 and an MAE of 0.713°.

3.1.2. Estimating the Shoulder Abduction Model Irrespective of the Camera Position

As part of the exploratory data analysis (EDA), a heatmap of the correlations between the parameters is illustrated in Figure 8. According to the heatmap, the actual angle measured by the UG was positively correlated with rtshoulder_3Dabduction, rtshoulderAbduction, and rtelbowhip_distratio, and negatively correlated with ltarm_distratio. The correlation coefficients between each parameter and the actual angle are summarized in Table 3. As all parameters were correlated with the true abduction angle, all of them were used to train the LightGBM model. The model performance on the test data demonstrated a strong positive correlation between the actual angle measured by the UG and the predicted values, with an R2 = 0.997 and an MAPE of 1.566%. To identify the importance of each parameter for predicting the shoulder abduction angle, we evaluated the feature importance. Overall, rtshoulder_3Dabduction, rtelbowhip_distratio, and rtshoulder_3Dangle were ranked as the most essential parameters in both the permutation feature importance plot (Figure 9a) and the SHAP scores (Figure 9b).

4. Discussion

In this study, we accurately estimated the shoulder-joint abduction angle at various camera angles by combining MediaPipe with ML models. To our knowledge, this is the first report employing such an approach. In the initial experiment, the camera was set at six distinct positions relative to the subject, and the shoulder-joint abduction angle was estimated using the parameters obtained from the images at each camera position combined with ML. The preliminary experiments revealed that the error in detecting the right shoulder coordinates by MediaPipe increased with the right shoulder abduction angle. The shoulderAbduction and shoulder_3Dabduction parameters represent the angles calculated from the shoulder and hip coordinates detected by MediaPipe in 2D and 3D, respectively; therefore, they are not equivalent to the shoulder-joint abduction angles measured using the UG. Accordingly, several parameters were adopted for model training to accurately estimate the shoulder abduction angle. At diagonal camera positions, the centers of the shoulder and waist could not be accurately detected from the RGB images, which can produce errors. However, using several parameters, we could develop an ML model with relatively high accuracy, even when the camera was placed diagonally relative to the participant. The second stage of this experiment involved estimating the camera position from the participants' images. At this stage, the faceAngle parameter, calculated using the coordinates of the nose and the left and right shoulders, exhibited a strong correlation with the camera installation angle. Face detection is among the most mature components of human posture estimation, and the estimated position of the face, especially the nose, was less affected by the attire and body shape of the participant, thereby contributing to a higher accuracy than other joint estimates. Adopting the coordinates of the facial position was therefore highly effective for estimating the camera installation position. Third, we developed a two-stage model that estimates the shoulder abduction angle after estimating the camera installation angle. This enabled us to estimate the shoulder abduction angle without knowing the camera position in advance. The coefficient of determination, R2, is useful for evaluating regression analyses, and a prediction is considered accurate when R2 is close to 1 [42]. In addition, the MAPE was used for accuracy evaluation, considering MAPE ≤ 5% an excellent match, 5% < MAPE ≤ 10% an adequate match, 10% < MAPE ≤ 15% an acceptable match, and MAPE > 15% an unacceptable match [43]. The shoulder abduction angle estimation model exhibited high accuracy, with an R2 = 0.997 and an MAPE of 1.539% between the angles measured by the UG and the predicted values. The precision of our proposed method gains further clarity when contrasted with the existing literature evaluating shoulder abduction with markerless motion capture techniques. Beshara et al. [32] assessed shoulder abduction using inertial sensors and Microsoft Kinect, drawing a comparison to goniometer measurements, and reported a high degree of reliability, with an intraclass correlation coefficient (ICC) of 0.93 and inter-rater discrepancies of no more than ±10°. Similarly, Lafayette et al. [36] appraised shoulder joint angles utilizing an RGB-D camera in conjunction with MediaPipe.
Despite endorsing MediaPipe as the most accurate method in their study, they also reported an absolute deviation of 10.94° in the anterior plane and 13.87° when assessed at a 30° oblique angle. Conversely, in our methodology, even with an expansion to six distinct camera placements, a high degree of accuracy was sustained, with an MAPE of 1.539% across a ROM spanning 10° to 160°. The positive correlations of the true angle with rtshoulder_3Dabduction, rtshoulderAbduction, and rtelbowhip_distratio, all of which reflect the positions of the right elbow and right hip relative to the right shoulder, were consistent with these parameters tracking the right shoulder joint angle. In the medical field, explainable artificial intelligence (XAI) refers to a collection of tools and frameworks aimed at understanding the decision-making process of ML models while maintaining high predictive accuracy and reliability, and its significance has been emphasized in previous research [44]. Prior research incorporated SHAP and permutation feature importance analyses to ensure transparency and interpretability [40]. The permutation feature importance and SHAP scores of the current model for shoulder-joint abduction angle estimation were high for rtshoulder_3Dabduction and rtelbowhip_distratio, which represent the positions of the elbow and hip joints relative to the shoulder. These results confirm that the angles calculated from the vectors, as well as the distances between the coordinates, are crucial parameters for estimating the shoulder abduction angle using ML models. Thus, by combining the posture-estimation AI MediaPipe with LightGBM-based ML, the shoulder-joint abduction angle can be accurately estimated even when the camera is positioned diagonally with respect to the participant. A further refinement of this method would therefore enable the accurate, real-time estimation of shoulder joint movements during rehabilitation or sporting activities using RGB capture devices, which are considerably more cost-effective than RGB-D cameras. This approach thus promises to significantly enhance the accessibility and affordability of high-precision motion capture for broader applications.

Limitations

This study had several limitations. First, the American Academy of Orthopaedic Surgeons defines the shoulder-joint abduction angle as a value between 0° and 180° [45]. In our study, we regarded a shoulder abduction angle of 160° as the upper limit because several participants could not achieve a shoulder abduction angle of 170° or 180°. Second, although the UG measurements were recorded at intervals of 10°, more precise ROM measurements may be required in clinical practice; therefore, the dataset should be extended by measuring the angles at smaller intervals. Third, the camera angle was varied from −30° to 45° in increments of 15°. One particular limitation of our study design was the absence of the −45° camera angle. The primary reason for not including this angle was that, for larger-bodied participants, the right elbow might not be fully captured in the image, leading to incomplete analysis. However, if this approach is applied to rehabilitation or sports motion analysis, a greater number of camera angle variations may be required, potentially including the −45° angle with the necessary adjustments for larger-bodied participants. Fourth, the placement of the strong magnetic wristband on the dorsal part of the participant's wrist likely resulted in an external rotation of the entire arm during the experiment. This could affect the accuracy of our measurements, particularly when compared with other placements such as over the ulnar styloid process. Fifth, although only the shoulder-joint abduction angle was examined, the shoulder joint can undergo various motions, including flexion and internal/external rotation. Therefore, the application of the current model to clinical motion analysis may be limited.
In the future, a follow-up study will focus on the further development of the proposed model with more extensive data, including alternative wristband placements to avoid unintentional measurement bias and more complete ROM coverage across body types and camera angles. In summary, the proposed approach, combining pose-estimation AI and ML models, is advantageous for human motion analysis, despite its requirement for additional data.

5. Conclusions

In this study, we demonstrated the potential of employing two AI-based libraries, MediaPipe and LightGBM, for markerless motion capture and the estimation of shoulder abduction angles. Ten healthy participants were included, with shoulder abduction captured using smartphone cameras positioned at various diagonal angles. We utilized MediaPipe to detect the positions of key body parts such as the shoulders, elbows, hips, and nose, and calculated the distances, angles, and areas between the joints to construct the parameters. These parameters were employed as training data for LightGBM, which yielded promising results. Considering the goniometer angle data as the ground truth, we developed ML models to estimate the shoulder abduction angle. The coefficient of determination (R2) and the MAPE were used for model evaluation, with the trained model yielding an R2 = 0.988 and an MAPE of 1.539%.
The proposed approach demonstrated the ability to estimate shoulder abduction angles even when the camera was positioned diagonally with respect to the participant and therefore has potential implications for the real-time estimation of shoulder motion during rehabilitation or sports activities. This study proposes a low-cost, high-accuracy, image-based technique, built on a deep-learning posture-estimation model and gradient boosting, for detecting shoulder abduction angles, which exhibits superior performance compared with conventional methods. Consequently, it enables the effective and timely estimation of shoulder abduction angles, facilitating practical applications in various settings.
In conclusion, this study presents a valuable advancement in AI-based markerless motion capture for joint angle estimation. The application of MediaPipe to detect body landmarks, the calculation of distances, angles, and areas between joints, and the design of parameters for LightGBM were validated as effective. These findings establish a solid foundation for future exploration and innovation in this field, with practical applications beyond the assessment of shoulder abduction. In future research, we envision broadening the range of joint movements assessed, such as shoulder flexion, internal and external rotation, and lower limb joint angles, by increasing the number of training angles. Additionally, we intend to apply machine learning to specific movements for more detailed motion analysis.

Author Contributions

Conceptualization, M.K.; methodology, Y.M.; software, A.I. and H.N.; validation, S.T., T.F., T.K., M.K. and I.S.; formal analysis, S.T.; investigation, S.T.; resources, I.S., T.Y. and A.I.; data curation, S.T.; writing—original draft preparation, S.T.; writing—review and editing, A.I. and Y.M.; visualization, S.T.; supervision, Y.M.; project administration, R.K.; funding acquisition, R.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Kobe University Review Board (approval number: 34261514; approval date: 2 November 2022).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Written informed consent has been obtained from the patients to publish this paper.

Data Availability Statement

The data presented in this study are available upon request from the corresponding author. The data are not publicly available because of confidentiality concerns.

Conflicts of Interest

The authors declare no conflict of interest. The sponsors had no role in the design, execution, interpretation, or writing of the study.

References

  1. Green, S.; Buchbinder, R.; Glazier, R.; Forbes, A. Systematic review of randomized controlled trials of interventions for painful shoulder: Selection criteria, outcome assessment, and efficacy. BMJ 1998, 316, 354–360. [Google Scholar] [CrossRef]
  2. Muir, S.W.; Corea, C.L.; Beaupre, L. Evaluating change in clinical status: Reliability and measures of agreement for the assessment of glenohumeral range of motion. N. Am. J. Sports Phys. Ther. 2010, 5, 98–110. [Google Scholar] [PubMed]
  3. Vecchio, P.C.; Kavanagh, R.T.; Hazleman, B.L.; King, R.H. Community survey of shoulder disorders in the elderly to assess the natural history and effects of treatment. Ann. Rheum. Dis. 1995, 54, 152–154. [Google Scholar] [CrossRef]
  4. Milanese, S.; Gordon, S.J.; Buettner, P.; Flavell, C.; Ruston, S.; Coe, D.; O’Sullivan, W.; McCormack, S. Reliability and concurrent validity of knee angle measurement: Smart phone app versus universal goniometer used by experienced and novice clinicians. Man. Ther. 2014, 19, 569–574. [Google Scholar] [CrossRef] [PubMed]
  5. Brosseau, L.; Tousignant, M.; Budd, J.; Chartier, N.; Duciaume, L.; Plamondon, S.; O’Sullivan, J.P.; O’Donoghue, S.; Balmer, S. Intratester and intertester reliability and criterion validity of the parallelogram and universal goniometers for active knee flexion in healthy subjects. Physiother. Res. Int. 1997, 2, 150–166. [Google Scholar] [CrossRef]
  6. El-Zayat, B.F.; Efe, T.; Heidrich, A.; Wolf, U.; Timmesfeld, N.; Heyse, T.J.; Lakemeier, S.; Fuchs-Winkelmann, S.; Schofer, M.D. Objective assessment of shoulder mobility with a new 3D gyroscope—A validation study. BMC Musculoskelet Disord. 2011, 12, 168. [Google Scholar] [CrossRef]
  7. El-Zayat, B.F.; Efe, T.; Heidrich, A.; Anetsmann, R.; Timmesfeld, N.; Fuchs-Winkelmann, S.; Schofer, M.D. Objective assessment, repeatability, and agreement of shoulder ROM with a 3D gyroscope. BMC Musculoskelet Disord. 2013, 14, 72. [Google Scholar] [CrossRef] [PubMed]
  8. Cheng, S.Y.; Trivedi, M.M. Human posture estimation using voxel data for “Smart” airbag systems: Issues and framework. In Proceedings of the IEEE Intelligent Vehicles Symposium, Parma, Italy, 14–17 June 2004; pp. 84–89. [Google Scholar]
  9. Yahya, M.; Shah, J.A.; Kadir, K.A.; Yusof, Z.M.; Khan, S.; Warsi, A. Motion capture sensing techniques used in human upper limb motion: A review. Sensor Rev. 2019, 39, 504–514. [Google Scholar] [CrossRef]
  10. Wu, G.; van der Helm, F.C.T.; Veeger, H.E.J.; Makhsous, M.; Van Roy, P.; Anglin, C.; Nagels, J.; Karduna, A.R.; McQuade, K.; Wang, X.; et al. ISB recommendation on definitions of joint coordinate systems of various joints for the reporting of human joint motion—Part II: Shoulder, elbow, wrist and hand. J. Biomech. 2005, 38, 981–992. [Google Scholar] [CrossRef]
  11. Picerno, P. 25 years of lower limb joint kinematics by using inertial and magnetic sensors: A review of methodological approaches. Gait Posture. 2017, 51, 239–246. [Google Scholar] [CrossRef]
  12. Parel, I.; Cutti, A.G.; Fiumana, G.; Porcellini, G.; Verni, G.; Accardo, A.P. Ambulatory measurement of the scapulohumeral rhythm: Intra- and inter-operator agreement of a protocol based on inertial and magnetic sensors. Gait Posture. 2012, 35, 636–640. [Google Scholar] [CrossRef]
  13. Yamaura, K.; Mifune, Y.; Inui, A.; Nishimoto, H.; Kataoka, T.; Kurosawa, T.; Mukohara, S.; Hoshino, Y.; Niikura, T.; Nagamune, K.; et al. Accuracy and reliability of tridimensional electromagnetic sensor system for elbow ROM measurement. J. Orthop. Surg. Res. 2022, 17, 60. [Google Scholar] [CrossRef]
  14. Mukohara, S.; Mifune, Y.; Inui, A.; Nishimoto, H.; Kurosawa, T.; Yamaura, K.; Yoshikawa, T.; Shinohara, I.; Hoshino, Y.; Nagamune, K.; et al. A new quantitative evaluation system for distal radioulnar joint instability using a three-dimensional electromagnetic sensor. J. Orthop. Surg. Res. 2021, 16, 452. [Google Scholar] [CrossRef]
  15. Morton, N.A.; Maletsky, L.P.; Pal, S.; Laz, P.J. Effect of variability in anatomical landmark location on knee kinematic description. J. Orthop. Res. 2007, 25, 1221–1230. [Google Scholar] [CrossRef] [PubMed]
  16. Faisal, A.I.; Majumder, S.; Mondal, T.; Cowan, D.; Naseh, S.; Deen, M.J. Monitoring Methods of Human Body Joints: State-of-the-Art and Research Challenges. Sensors 2019, 19, 2629. [Google Scholar] [CrossRef] [PubMed]
  17. Stenum, J.; Cherry-Allen, K.M.; Pyles, C.O.; Reetzke, R.D.; Vignos, M.F.; Roemmich, R.T. Applications of Pose Estimation in Human Health and Performance across the Lifespan. Sensors 2021, 21, 7315. [Google Scholar] [CrossRef]
  18. Toshev, A.; Szegedy, C. DeepPose: Human pose estimation via deep neural networks. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 1653–1660. [Google Scholar]
  19. Pishchulin, L.; Insafutdinov, E.; Tang, S.; Andres, B.; Andriluka, M.; Gehler, P.; Schiele, B. DeepCut: Joint Subset Partition and Labeling for Multi Person Pose Estimation. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4929–4937. [Google Scholar]
  20. Insafutdinov, E.; Andriluka, M.; Pishchulin, L.; Tang, S.; Levinkov, E.; Andres, B.; Schiele, B. ArtTrack: Articulated Multi-Person Tracking in the Wild. In Proceedings of the 30th IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 6457–6465. [Google Scholar]
  21. Martinez, G.H.; Raaj, Y.; Idrees, H.; Xiang, D.; Joo, H.; Simon, T.; Sheikh, Y. Single-network whole-body pose estimation. In Proceedings of the IEEE International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 6982–6991. [Google Scholar]
  22. Drazan, J.F.; Phillips, W.T.; Seethapathi, N.; Hullfish, T.J.; Baxter, J.R. Moving outside the lab: Markerless motion capture accurately quantifies sagittal plane kinematics during the vertical jump. J. Biomech. 2021, 125, 110547. [Google Scholar] [CrossRef]
  23. Tanaka, R.; Ishikawa, Y.; Yamasaki, T.; Diez, A. Accuracy of classifying the movement strategy in the functional reach test using a markerless motion capture system. J. Med. Eng. Technol. 2019, 43, 133–138. [Google Scholar] [CrossRef]
  24. Wochatz, M.; Tilgner, N.; Mueller, S.; Rabe, S.; Eichler, S.; John, M.; Völler, H.; Mayer, F. Reliability and validity of the Kinect V2 for the assessment of lower extremity rehabilitation exercises. Gait Posture 2019, 70, 330–335. [Google Scholar] [CrossRef] [PubMed]
  25. Yeung, L.-F.; Yang, Z.; Cheng, K.C.-C.; Du, D.; Tong, R.K.-Y. Effects of camera viewing angles on tracking kinematic gait patterns using Azure Kinect, Kinect v2 and Orbbec Astra Pro v2. Gait Posture 2021, 87, 19–26. [Google Scholar] [CrossRef]
  26. Qu, Y.; Hwang, J.; Lee, K.S.; Jung, M.C. The effect of camera location on observation-based posture estimation. Ergonomics 2012, 55, 885–897. [Google Scholar] [CrossRef]
  27. Bao, S.; Howard, N.; Spielholz, P.; Silverstein, B. Two posture analysis approaches and their application in a modified rapid upper limb assessment evaluation. Ergonomics 2007, 50, 2118–2136. [Google Scholar] [CrossRef] [PubMed]
  28. Armitano-Lago, C.; Willoughby, D.; Kiefer, A.W. A SWOT Analysis of Portable and Low-Cost Markerless Motion Capture Systems to Assess Lower-Limb Musculoskeletal Kinematics in Sport. Front. Sports Act. Living. 2022, 3, 809898. [Google Scholar] [CrossRef]
  29. Lam, W.W.T.; Tang, Y.M.; Fong, K.N.K. A systematic review of the applications of markerless motion capture (MMC) technology for clinical measurement in rehabilitation. J. Neuro Eng. Rehabil. 2023, 20, 57. [Google Scholar] [CrossRef]
  30. Gauci, M.O.; Olmos, M.; Cointat, C.; Chammas, P.E.; Urvoy, M.; Murienne, A.; Bronsard, N.; Gonzalez, J.F. Validation of the shoulder range of motion software for measurement of shoulder ranges of motion in consultation: Coupling a red/green/blue-depth video camera to artificial intelligence. Int. Orthop. 2023, 47, 299–307. [Google Scholar] [CrossRef]
  31. Gritsenko, V.; Dailey, E.; Kyle, N.; Taylor, M.; Whittacre, S.; Swisher, A.K. Feasibility of using low-cost motion capture for automated screening of shoulder motion limitation after breast cancer surgery. PLoS ONE 2015, 10, e0128809. [Google Scholar] [CrossRef]
  32. Beshara, P.; Chen, J.F.; Read, A.C.; Lagadec, P.; Wang, T.; Walsh, W.R. The Reliability and validity of wearable inertial sensors coupled with the microsoft kinect to measure shoulder range-of-motion. Sensors 2020, 20, 7238. [Google Scholar] [CrossRef]
  33. Cao, Z.; Simon, T.; Wei, S.E.; Sheikh, Y. Realtime multi-person 2d pose estimation using part affinity fields. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7291–7299. [Google Scholar]
  34. Bazarevsky, V.; Grishchenko, I.; Raveendran, K.; Zhu, T.; Zhang, F.; Grundmann, M. Blazepose: On-device real-time body pose tracking. arXiv 2020, arXiv:2006.10204. [Google Scholar]
  35. Kishore, D.M.; Bindu, S.; Manjunath, N.K. Estimation of Yoga Postures Using Machine Learning Techniques. Int. J. Yoga 2022, 15, 137–143. [Google Scholar] [CrossRef] [PubMed]
  36. Lafayette, T.B.G.; Kunst, V.H.L.; Melo, P.V.S.; Guedes, P.O.; Teixeira, J.M.X.N.; Vasconcelos, C.R.; Teichrieb, V.; da Gama, A.E.F. Validation of Angle Estimation Based on Body Tracking Data from RGB-D and RGB Cameras for Biomechanical Assessment. Sensors 2022, 23, 3. [Google Scholar] [CrossRef]
  37. Clarkson, H.M. Joint Motion and Function Assessment: A Research Based Practical Guide; Lippincott Williams & Wilkins: Philadelphia, PA, USA, 2005; pp. 60–61. [Google Scholar]
  38. Yan, J.; Xu, Y.; Cheng, Q.; Jiang, S.; Wang, Q.; Xiao, Y.; Ma, C.; Yan, J.; Wang, X. LightGBM: Accelerated genomically designed crop breeding through ensemble learning. Genome Biol. 2021, 22, 271. [Google Scholar] [CrossRef] [PubMed]
  39. Ji, P.; Wang, X.; Ma, F.; Feng, J.; Li, C. A 3D Hand Attitude Estimation Method for Fixed Hand Posture Based on Dual-View RGB Images. Sensors 2022, 22, 8410. [Google Scholar] [CrossRef] [PubMed]
  40. Fisher, A.; Rudin, C.; Dominici, F. All Models are Wrong, but Many are Useful: Learning a Variable’s Importance by Studying an Entire Class of Prediction Models Simultaneously. J. Mach. Learn. Res. 2019, 20, 177. [Google Scholar] [PubMed]
  41. Lundberg, S.M.; Lee, S. A unified approach to interpreting model predictions. In Proceedings of the Advances in Neural Information Processing Systems 30 (NIPS), Long Beach, CA, USA, 4–9 December 2017. [Google Scholar]
  42. Chicco, D.; Warrens, M.J.; Jurman, G. The coefficient of determination R-squared is more informative than SMAPE, MAE, MAPE, MSE and RMSE in regression analysis evaluation. PeerJ Comput. Sci. 2021, 7, e623. [Google Scholar] [CrossRef]
  43. Fusca, M.; Negrini, F.; Perego, P.; Magoni, L.; Molteni, F.; Andreoni, G. Validation of a wearable IMU system for gait analysis: Protocol and application to a new system. Appl. Sci. 2018, 8, 1167. [Google Scholar] [CrossRef]
  44. Sheu, R.K.; Pardeshi, M.S. A Survey on Medical Explainable AI (XAI): Recent Progress, Explainability Approach, Human Interaction and Scoring System. Sensors 2022, 22, 8068. [Google Scholar] [CrossRef]
  45. American Academy of Orthopaedic Surgeons. Joint Motion: Method of Measuring and Recording; American Academy of Orthopaedic Surgeons: Chicago, IL, USA, 1965. [Google Scholar]
Figure 1. Measurement of UG.
Figure 2. Wristband with a strong magnet.
Figure 3. The camera position for recording.
Figure 4. MediaPipe landmarks.
Figure 5. Example of joint detection by MediaPipe.
Figure 6. Workflow of data acquisition and machine learning.
Figure 7. Various parameters. Numbers are explained in Table 1.
Figure 8. Heatmap of each parameter. Warm colors indicate a positive correlation, whereas cool colors signify a negative correlation; true_angle exhibits a positive correlation with rtshoulder_3Dabduction, rtshoulderAbduction, and rtelbowhip_distratio and a negative correlation with ltarm_distratio.
Figure 9. (a) Significance of permutation features of the LightGBM model. The top three essential features were rtarm_distratio, rtshoulder_3Dangle, and rtshoulder_distratio. (b) SHAP values of the LightGBM model. The top three essential features were rtshoulder_3Dabduction, rtarm_distratio, and rtshoulder_3Dangle. Warm colors denote a positive impact on model performance, whereas cool colors indicate a negative impact.
Table 1. Parameters used for the training of each machine learning model.
  • Estimation of shoulder abduction at a fixed camera position: arm_distratio ①/②, elbowhip_distratio ③/②, hip_distratio ④/②, shoulder_distratio ⑤/②, shoulderAbduction ⑥, shoulder_3Dabduction ⑥, shoulderAngle ⑦, shoulder_3Dangle ⑦.
  • Estimation of camera installation position: hip_distratio ④/②, uppertrunkAngle ⑧, ⑨, lowertrunkAngle ⑩, ⑪, faceAngle ⑫, ⑬, trunksize.
  • Estimation of shoulder abduction at any camera installation position: arm_distratio ①/②, elbowhip_distratio ③/②, shoulderAbduction ⑥, shoulder_3Dabduction ⑥, shoulderAngle ⑦, shoulder_3Dangle ⑦, estimate_camAngle.
Table 2. The LightGBM model was more accurate for all camera angles.

Camera angle (°)            −30      −15      0        15       30       45
Linear regression
  correlation coefficient   0.993    0.991    0.992    0.998    0.996    0.998
  R2                        0.986    0.981    0.984    0.995    0.993    0.995
  MAPE (%)                  12.320   11.758   9.281    6.143    9.392    10.780
LightGBM
  correlation coefficient   1.000    1.000    1.000    1.000    0.999    0.999
  R2                        0.999    0.999    0.999    1.000    0.998    0.998
  MAPE (%)                  0.612    0.978    0.686    0.322    1.706    1.516
Table 3. Correlation coefficients of each parameter with actual angles.

Parameter                 Correlation Coefficient   p Value
cam_angle                 0.0100                    <0.01
rtshoulderAngle           0.748                     <0.01
rtshoulderAbduction       0.978                     <0.01
rtshoulder_3Dangle        0.774                     <0.01
rtshoulder_3Dabduction    0.982                     <0.01
ltshoulderAngle           0.551                     <0.01
ltshoulderAbduction       0.170                     <0.01
ltshoulder_3Dangle        0.839                     <0.01
ltshoulder_3Dabduction    0.302                     <0.01
rtelbowhip_distratio      0.977                     <0.01
ltelbowhip_distratio      0.133                     <0.01
rtarm_distratio           −0.856                    <0.01
ltarm_distratio           0.175                     <0.01
rtshoulder_distratio      −0.783                    <0.01
ltshoulder_distratio      0.175                     <0.01
rthip_distratio           −0.860                    <0.01
lthip_distratio           0.0828                    <0.01
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Kusunose, M.; Inui, A.; Nishimoto, H.; Mifune, Y.; Yoshikawa, T.; Shinohara, I.; Furukawa, T.; Kato, T.; Tanaka, S.; Kuroda, R. Measurement of Shoulder Abduction Angle with Posture Estimation Artificial Intelligence Model. Sensors 2023, 23, 6445. https://doi.org/10.3390/s23146445
