Reinforcement-Learning-Based Finite-Time Fault-Tolerant Control for a Manipulator With Actuator Faults
This study introduces a novel finite-time fault-tolerant controller that integrates nonsingular terminal sliding mode (NTSM) and reinforcement learning (RL) strategies for manipulator systems with actuator faults. Using an actor-critic network architecture, the RL algorithm computes the cost function and approximates the unknown nonlinear dynamics. The inherent properties of NTSM mitigate the effects of parameter uncertainties, thereby enhancing system robustness. In addition, an adaptive law is designed to compensate for the adverse effects of actuator faults. Through the direct Lyapunov approach, the closed-loop system is shown to achieve semi-global practical finite-time stability. The proposed strategy reduces the dependence on an accurate system model and improves fault tolerance. Simulation results confirm the feasibility of the algorithm, and its efficacy is further validated through experiments on the 6-DOF Kinova Jaco 2 platform.
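To make the NTSM component concrete, the following is a minimal sketch of a nonsingular terminal sliding-mode law on a unit-inertia single link tracking a zero reference. All numerical values (the surface parameters beta, p, q and the switching gain K) are illustrative choices, not values from the paper, and the nominal dynamics are taken as known here; in the proposed controller the unknown nonlinear part would instead be approximated by the actor-critic network and the fault effect handled by the adaptive law.

```python
import math

def spow(x, a):
    """Sign-preserving power: sign(x) * |x|^a."""
    return math.copysign(abs(x) ** a, x) if x != 0.0 else 0.0

# Illustrative NTSM parameters (assumed, not from the paper):
# sliding surface s = e + (1/beta) * e_dot^(p/q) with 1 < p/q < 2
beta, p, q = 1.0, 5.0, 3.0
K = 2.0          # switching gain; must dominate uncertainties/faults

# Plant: unit-inertia single link, e_ddot = u (tracking error dynamics,
# reference = 0). Initial tracking error of 1 rad, at rest.
e, e_dot = 1.0, 0.0
dt = 1e-3
for _ in range(8000):  # 8 s of explicit Euler integration
    s = e + (1.0 / beta) * spow(e_dot, p / q)
    # NTSM law: equivalent-control term (makes s_dot = 0 on the surface)
    # plus a discontinuous reaching term that drives s to zero.
    u = -beta * (q / p) * spow(e_dot, 2.0 - p / q) \
        - K * math.copysign(1.0, s)
    e_dot += u * dt
    e += e_dot * dt

print(abs(e))  # tracking error after 8 s: driven close to zero
```

On the surface s = 0, the error obeys e_dot = -|e|^(q/p) sign(e), which reaches zero in finite time; the exponent 2 - p/q in the equivalent-control term keeps the law nonsingular at e_dot = 0, which is the property that distinguishes NTSM from conventional terminal sliding mode.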