Bi-Xiao Wu, Chen-Guang Yang, Jun-Pei Zhong. Research on Transfer Learning of Vision-based Gesture Recognition[J]. Machine Intelligence Research, 2021, 18(3): 422-431. DOI: 10.1007/s11633-020-1273-9

Research on Transfer Learning of Vision-based Gesture Recognition

  • Gesture recognition has been widely used for human-robot interaction. At present, a key problem in gesture recognition is that knowledge learned in existing domains is not reused to discover and recognize gestures in new domains. For each new domain, a large amount of data must be collected and annotated, and the training of the algorithm does not benefit from prior knowledge, leading to redundant computation and excessive time investment. To address this problem, this paper proposes a method that can transfer gesture data across different domains. We use a red-green-blue (RGB) camera to collect images of the gestures, and use Leap Motion to collect the coordinates of 21 joint points of the human hand. Then, we extract a set of novel feature descriptors from the two different data distributions for the study of transfer learning. This paper compares the effects of three classification algorithms, i.e., support vector machine (SVM), broad learning system (BLS) and deep learning (DL). We also compare learning performance with and without the joint distribution adaptation (JDA) algorithm. The experimental results show that the proposed method can effectively solve the transfer problem between the RGB camera and Leap Motion. In addition, we found that when DL is used to classify the data, excessive training on the source domain may reduce the recognition accuracy in the target domain. A hedged sketch of the JDA-plus-classifier pipeline described here is given below.
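The abstract describes aligning feature descriptors extracted from two domains (RGB camera images and Leap Motion joint coordinates) with JDA and then classifying them with SVM, BLS, or DL. The sketch below is a minimal, simplified illustration of that idea, not the authors' implementation: it follows the standard JDA formulation (marginal plus pseudo-label conditional MMD, solved as a generalized eigenproblem) and pairs it with a scikit-learn SVM. The function name `jda_transform`, the parameter values, and the choice of kernel are all illustrative assumptions.

```python
# Illustrative sketch only: a simplified JDA + SVM pipeline, assuming
# pre-extracted feature matrices Xs (source) and Xt (target).
import numpy as np
import scipy.linalg
from sklearn.svm import SVC

def jda_transform(Xs, Ys, Xt, k=30, lam=1.0, n_iter=5):
    """Simplified joint distribution adaptation (JDA).

    Learns a projection that reduces both the marginal and the
    (pseudo-label based) conditional distribution gap between the
    source features Xs and the target features Xt, then trains an
    SVM on the projected source features.
    """
    X = np.vstack((Xs, Xt)).T                    # d x n, columns = samples
    X = X / np.linalg.norm(X, axis=0)            # column-normalize features
    d, n = X.shape
    ns, nt = len(Xs), len(Xt)
    C = len(np.unique(Ys))
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Yt_pseudo = None

    for _ in range(n_iter):
        # marginal MMD matrix
        e = np.vstack((np.ones((ns, 1)) / ns, -np.ones((nt, 1)) / nt))
        M = (e @ e.T) * C
        # conditional MMD matrices, one per class, using pseudo labels
        if Yt_pseudo is not None:
            for c in np.unique(Ys):
                e = np.zeros((n, 1))
                src_idx = np.where(Ys == c)[0]
                tgt_idx = np.where(Yt_pseudo == c)[0] + ns
                if len(src_idx) and len(tgt_idx):
                    e[src_idx] = 1.0 / len(src_idx)
                    e[tgt_idx] = -1.0 / len(tgt_idx)
                    M += e @ e.T
        M /= np.linalg.norm(M, 'fro')

        # generalized eigenproblem: (X M X^T + lam I) A = (X H X^T) A Phi
        a = X @ M @ X.T + lam * np.eye(d)
        b = X @ H @ X.T
        w, V = scipy.linalg.eig(a, b)
        idx = np.argsort(np.abs(w))[:k]          # k smallest eigenvalues
        A = np.real(V[:, idx])
        Z = A.T @ X                              # projected features
        Zs, Zt = Z[:, :ns].T, Z[:, ns:].T

        clf = SVC(kernel='rbf', gamma='scale').fit(Zs, Ys)
        Yt_pseudo = clf.predict(Zt)              # refine target pseudo labels

    return Zs, Zt, clf

# Usage (hypothetical data shapes): Xs, Ys from one sensor's descriptors,
# Xt from the other; the returned clf predicts labels for the target domain.
# Zs, Zt, clf = jda_transform(Xs, Ys, Xt)
# Yt_pred = clf.predict(Zt)
```

In this sketch the SVM could equally be replaced by a BLS or DL classifier operating on the projected features, which is the comparison the paper reports.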
