Bowen Chen, Xiao Ding, Yi Zhao, Bo Fu, Tingmao Lin, Bing Qin, Ting Liu. Text Difficulty Study: Do Machines Behave the Same as Humans Regarding Text Difficulty?[J]. Machine Intelligence Research, 2024, 21(2): 283-293. DOI: 10.1007/s11633-023-1424-x

Text Difficulty Study: Do Machines Behave the Same as Humans Regarding Text Difficulty?

  • With the emergence of pre-trained models, current neural networks achieve task performance comparable to that of humans. However, we know little about the fundamental working mechanism of pre-trained models: we do not know how they reach such performance or how they solve the task. For example, given a task, humans learn from easy to hard, whereas the model learns in a random order. Undeniably, difficulty-insensitive learning has led to great success in natural language processing (NLP), but little attention has been paid to the effect of text difficulty in NLP. We propose a human learning matching index (HLM Index) to investigate the effect of text difficulty. Experimental results show that: 1) LSTM exhibits more human-like learning behavior than BERT, and UID-SuperLinear gives the best estimate of text difficulty among the four text difficulty criteria; across the nine tasks, performance on some tasks is related to text difficulty, whereas on others it is not. 2) A model trained on easy data performs best on both easy and medium test data, whereas a model trained on hard data performs well only on hard test data. 3) Training the model from easy to hard leads to quicker convergence.
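Finding 3) describes curriculum-style training, i.e., presenting examples in easy-to-hard order rather than shuffling them. The sketch below illustrates this idea only; the paper's HLM Index and the UID-SuperLinear difficulty criterion are not reproduced here, so the length-based difficulty score and the training call are hypothetical stand-ins.

```python
# Minimal sketch of easy-to-hard (curriculum) ordering by text difficulty.
# The difficulty_score below is a simple proxy (mean word length x number of
# words), NOT the paper's UID-SuperLinear criterion or HLM Index.
from typing import List, Tuple


def difficulty_score(text: str) -> float:
    """Proxy difficulty: mean word length times sentence length (assumption)."""
    words = text.split()
    if not words:
        return 0.0
    mean_word_len = sum(len(w) for w in words) / len(words)
    return mean_word_len * len(words)


def curriculum_order(examples: List[Tuple[str, int]]) -> List[Tuple[str, int]]:
    """Sort (text, label) pairs from easy to hard before training."""
    return sorted(examples, key=lambda ex: difficulty_score(ex[0]))


# Usage: feed examples to the model in easy-to-hard order instead of shuffling.
train_data = [
    ("Notwithstanding the aforementioned stipulations, the agreement holds.", 1),
    ("The cat sat.", 0),
]
for text, label in curriculum_order(train_data):
    # model.train_step(text, label)  # hypothetical training call
    print(difficulty_score(text), text)
```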
