Pre-trained language models have become a foundational technology in natural language processing. To further advance research and development in Chinese information processing, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released MiniRBT, a small Chinese pre-trained model built with its self-developed knowledge distillation tool TextBrewer, combining Whole Word Masking and Knowledge Distillation techniques.

Related models: Chinese LERT | Chinese-English PERT | Chinese MacBERT | Chinese ELECTRA | Chinese XLNet | Chinese BERT
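The two techniques named above can be sketched in a few lines of plain Python. This is an illustrative toy, not the MiniRBT/TextBrewer implementation: the function names, the masking rate, and the temperature value are assumptions for demonstration. Whole Word Masking means that when any subword of a word is selected for masking, all subwords of that word are masked together; knowledge distillation trains the small (student) model to match the temperature-softened output distribution of a large (teacher) model.

```python
import math
import random

def whole_word_mask(tokens, word_ids, mask_ratio=0.15, mask_token="[MASK]", seed=0):
    """Whole Word Masking sketch: words (not individual subwords) are
    sampled, and every subword of a sampled word is masked together.
    `word_ids` gives the word index each subword token belongs to."""
    rng = random.Random(seed)
    words = sorted(set(word_ids))
    n_mask = max(1, int(len(words) * mask_ratio))
    masked_words = set(rng.sample(words, n_mask))
    return [mask_token if w in masked_words else t
            for t, w in zip(tokens, word_ids)]

def softmax(logits, T=1.0):
    # temperature-softened softmax: higher T flattens the distribution
    exps = [math.exp(z / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """Soft cross-entropy between the teacher's and student's softened
    distributions, scaled by T^2 as in standard knowledge distillation."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return -T * T * sum(pi * math.log(qi) for pi, qi in zip(p, q))
```

For example, masking the subword sequence `["un", "##bel", "##ievable", "great"]` with `word_ids=[0, 0, 0, 1]` will always mask either all three subwords of the first word or the single-token second word, never a partial word; and `distillation_loss` is minimized exactly when the student's logits induce the same softened distribution as the teacher's.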

