Special Issue on Pre-Trained Language Models
Published 16 December 2020
The release of ELMo, BERT, and GPT in 2018 marked the success of pre-trained language models (PLMs) and led to major breakthroughs in natural language understanding and generation. Much work has since explored more efficient and effective architectures for pre-training, including methods that enhance PLMs with cross-modal data, cross-lingual data, or structured knowledge, as well as novel applications of PLMs to a wide range of NLP tasks.
This special issue is devoted to gathering and presenting cutting-edge reviews, research, and applications of PLMs, providing a platform for researchers to share their recent observations and achievements in this active field.
Topics Covered:
- Novel architectures and algorithms of PLMs
- Generative PLMs
- Fine-tuning and adaptation of PLMs
- Multi-tasking and continual learning of PLMs
- Knowledge-guided PLMs
- Cross-lingual or multi-lingual PLMs
- Cross-modal PLMs
- Knowledge distillation and model compression of PLMs
- Analysis and probing of PLMs
- Applications of PLMs in various areas, such as information retrieval, social computing, and recommendation
Submission Instructions:
Papers submitted to this journal for possible publication must be original and must not be under consideration for publication in any other journal. Extended versions of previously published work must contain a significant number of new and original ideas/contributions, along with more than 30% brand-new material.
Please read the Guide for Authors before submitting. All articles should be submitted online; please select "SI: Pre-Trained Language Models" when submitting.
Guest Editors:
- Dr. Zhiyuan Liu, Tsinghua University, China. Email: liuzy@tsinghua.edu.cn
- Dr. Xipeng Qiu, Fudan University, China. Email: xpqiu@fudan.edu.cn
- Dr. Jie Tang, Tsinghua University, China. Email: jietang@tsinghua.edu.cn