
Chinese Pre-Trained XLNet

This project provides an XLNet model pre-trained on Chinese text, which aims to enrich Chinese natural language processing resources and offer a wider selection of Chinese pre-trained models. We welcome all experts and scholars to download and use this model.

This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet
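A minimal usage sketch with the Hugging Face `transformers` library, assuming the checkpoint is available on the Hub under `hfl/chinese-xlnet-base` (the identifier used for this model card) and that `transformers` and `torch` are installed:

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the tokenizer and model from the Hugging Face Hub
# (downloads the checkpoint on first use).
tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-xlnet-base")
model = AutoModel.from_pretrained("hfl/chinese-xlnet-base")

# Encode a short Chinese sentence and run a forward pass.
inputs = tokenizer("你好，世界", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state has shape (batch, seq_len, hidden_size);
# hidden_size is 768 for the base model.
print(outputs.last_hidden_state.shape)
```

For fine-tuning on a downstream task, the checkpoint can likewise be loaded through a task-specific class such as `AutoModelForSequenceClassification`.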

You may also be interested in:

More resources by HFL: https://github.com/ymcui/HFL-Anthology

Citation

If you find our resources or paper useful, please consider including the following citation in your paper.

@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming  and
      Che, Wanxiang  and
      Liu, Ting  and
      Qin, Bing  and
      Wang, Shijin  and
      Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}