The release of ELMo, BERT, and GPT in 2018 marked the success of pre-trained language models (PLMs) and was followed by major breakthroughs in natural language understanding and generation. Much work has since explored more efficient and effective architectures for pre-training, for example, methods that enhance PLMs with cross-modal data, cross-lingual data, or structured knowledge, as well as innovative applications of PLMs to a wide range of NLP tasks.
This special issue is devoted to gathering and presenting cutting-edge reviews, research, and applications of PLMs, providing a platform for researchers to share their recent observations and achievements in this active field. Topics of interest include, but are not limited to:
- Novel architectures and algorithms of PLMs
- Generative PLMs
- Fine-tuning and adaptation of PLMs
- Multi-task and continual learning of PLMs
- Knowledge-guided PLMs
- Cross-lingual or multi-lingual PLMs
- Cross-modal PLMs
- Knowledge distillation and model compression of PLMs
- Analysis and probing of PLMs
- Applications of PLMs in various areas, such as information retrieval, social computing, and recommendation
Papers submitted to this journal must be original and must not be under consideration for publication in any other journal. Extended versions of previously published work must contain a significant number of new and original ideas/contributions, along with more than 30% brand-new material.