Pre-training Extractive Question-Answer Prompts for Few-Shot Chinese Text Classification
- DOI
- 10.2991/978-94-6463-222-4_34
- Keywords
- few-shot learning; prompt; multi-task learning; text classification
- Abstract
In recent years, pre-trained language models (PLMs) have made impressive progress, and prompt learning has made few-shot learning achievable. However, traditional prompt learning methods often require manual template design, and their performance may be unstable due to the limited data available in few-shot tasks. To address these issues, we propose a few-shot text classification method based on multi-task learning. We first unify multiple tasks into an extractive question-answering (EQA) format, then train the prompt on task data in this unified format. The prompt consists of modular prompts and a router that indicates their functionality. We then initialize the downstream training parameters using the router of a pre-training task similar to the downstream task, and employ contrastive learning to improve EQA efficiency.
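The first step the abstract describes, recasting a classification example as extractive question answering so that the label string becomes the answer span, can be sketched roughly as follows. The template wording, field names, and helper function below are illustrative assumptions, not the authors' implementation.

```python
from typing import Optional

def to_eqa_example(text: str, labels: list, gold_label: Optional[str] = None) -> dict:
    """Sketch: build an EQA-style example in which the candidate labels are
    listed in the context, so the correct label can be extracted as a span.
    The Chinese template below is a hypothetical illustration."""
    options = "/".join(labels)
    context = f"候选类别: {options}。 文本: {text}"   # candidate classes + passage
    question = "这段文本属于哪个类别?"                # "Which class does this text belong to?"
    example = {"context": context, "question": question}
    if gold_label is not None:
        start = context.index(gold_label)            # answer span = the label string itself
        example["answers"] = {"text": [gold_label], "answer_start": [start]}
    return example

# Usage: a few-shot sentiment example with two candidate labels.
print(to_eqa_example("这部电影太精彩了!", ["正面", "负面"], gold_label="正面"))
```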
- Copyright
- © 2023 The Author(s)
- Open Access
- Open Access This chapter is licensed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite this article
TY  - CONF
AU  - Gaojian Ding
AU  - Shuang Zheng
AU  - Quanmin Wang
PY  - 2023
DA  - 2023/08/28
TI  - Pre-training Extractive Question-Answer Prompts for Few-Shot Chinese Text Classification
BT  - Proceedings of the 2023 2nd International Conference on Artificial Intelligence, Internet and Digital Economy (ICAID 2023)
PB  - Atlantis Press
SP  - 318
EP  - 326
SN  - 2589-4919
UR  - https://doi.org/10.2991/978-94-6463-222-4_34
DO  - 10.2991/978-94-6463-222-4_34
ID  - Ding2023
ER  -