Mode-Unified Intent Estimation of a Robotic Prosthesis Using Deep Learning
Traditional robotic knee-ankle prostheses classify ambulation into discrete modes such as level walking, ramps, and stairs. However, human locomotion varies continuously across terrains rather than switching between discrete states, making traditional mode classifiers inadequate for accurate intent recognition. This paper proposes a mode-unified intent recognition strategy that continuously estimates terrain slope across five modes: level ground, ramp ascent/descent, and stair ascent/descent. Locomotion data from 16 individuals with transfemoral amputation were used to train slope estimation and mode classification models based on deep temporal convolutional networks. The proposed method was compared to a traditional mode classifier in offline tests, using leave-one-subject-out validation to assess user-independent performance. The mode-unified slope estimator achieved a mean absolute error (MAE) of 1.68 ± 0.60 degrees, outperforming the mode classifier's MAE of 1.94 ± 0.97 degrees (p < 0.05). The lower slope estimation errors translated into more accurate replication of able-bodied knee kinematics: in stair ascent, the proposed system achieved an average MAE of 5.13 ± 2.00 degrees for knee clearance and 6.74 ± 2.97 degrees for knee contact angle, compared to the traditional classifier's 12.10 ± 5.20 degrees and 13.80 ± 3.28 degrees, respectively (p < 0.01). These results suggest that the mode-unified approach can enable continuous adaptation to terrain without explicit mode classification.
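To make the modeling idea concrete, the sketch below outlines a mode-unified slope regressor built from dilated causal 1-D convolutions, in the spirit of a temporal convolutional network. The sensor channel count, window length, and layer depth are illustrative assumptions and do not reflect the exact architecture or training setup reported here; the only point is that a single regression head outputs a continuous slope with no mode label.

import torch
import torch.nn as nn

class CausalConvBlock(nn.Module):
    """One dilated causal convolution block with a residual connection."""
    def __init__(self, channels: int, kernel_size: int, dilation: int):
        super().__init__()
        # Left-pad by (k-1)*d so the convolution never looks into the future.
        self.pad = (kernel_size - 1) * dilation
        self.conv = nn.Conv1d(channels, channels, kernel_size, dilation=dilation)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = nn.functional.pad(x, (self.pad, 0))  # pad only on the past side
        out = self.relu(self.conv(out))
        return out + x                             # residual connection

class SlopeTCN(nn.Module):
    """Regresses a single terrain slope (degrees) from a window of prosthesis
    sensor signals; channel count and depth are placeholder assumptions."""
    def __init__(self, in_channels: int = 10, hidden: int = 64, levels: int = 4):
        super().__init__()
        self.input_proj = nn.Conv1d(in_channels, hidden, kernel_size=1)
        self.blocks = nn.Sequential(
            *[CausalConvBlock(hidden, kernel_size=3, dilation=2 ** i)
              for i in range(levels)]
        )
        self.head = nn.Linear(hidden, 1)  # continuous slope output, no mode class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time); read the feature vector at the last step.
        h = self.blocks(self.input_proj(x))
        return self.head(h[:, :, -1]).squeeze(-1)

# Example: a batch of 8 windows, 10 sensor channels, 200 samples per window.
slopes = SlopeTCN()(torch.randn(8, 10, 200))
print(slopes.shape)  # torch.Size([8])

Trained with an L1 or MSE loss against measured terrain slope, such a regressor can be evaluated with leave-one-subject-out splits by holding out each participant's data in turn and averaging the per-subject MAE, mirroring the user-independent comparison described above.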