Let's read some papers!!!
Please see section 7.3 of the project description.
Each person will be responsible for reading a paper, figuring out the principles, and applying them to our model.
Please write your name after a sub-section to claim it, and also assign yourself to the corresponding task at the bottom of this page.
For example:
7.3.1 Additional Pretraining
- How to Fine-Tune BERT for Text Classification? Sun et al. [2020] @sun.qumeng
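For whoever takes this one: a rough sketch of what additional (in-domain) pretraining could look like, i.e. continuing masked-language-model training on unlabeled in-domain text before the task fine-tuning. It uses Hugging Face `transformers` for brevity; the corpus file `domain_corpus.txt` and all hyperparameters are placeholder assumptions, not values from the paper.

```python
import torch
from transformers import (BertForMaskedLM, BertTokenizerFast,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

# Unlabeled in-domain corpus, one sentence per line (placeholder file name).
with open("domain_corpus.txt") as f:
    lines = [line.strip() for line in f if line.strip()]
encodings = tokenizer(lines, truncation=True, max_length=128)

class LineDataset(torch.utils.data.Dataset):
    def __init__(self, enc):
        self.enc = enc
    def __len__(self):
        return len(self.enc["input_ids"])
    def __getitem__(self, i):
        return {k: torch.tensor(v[i]) for k, v in self.enc.items()}

# The collator randomly masks 15% of tokens; the MLM loss is computed on those positions.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
args = TrainingArguments(output_dir="further-pretrained-bert",
                         num_train_epochs=1, per_device_train_batch_size=16)
Trainer(model=model, args=args, train_dataset=LineDataset(encodings),
        data_collator=collator).train()
model.save_pretrained("further-pretrained-bert")  # then fine-tune from this checkpoint
```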
7.3.2 Multiple Negative Ranking Loss Learning
- Efficient Natural Language Response Suggestion for Smart Reply. Henderson et al. [2017]
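For whoever takes this one, the core idea is in-batch negatives: each (query, response) pair in a batch treats every other response in the batch as a negative. A minimal sketch, assuming `query_emb` and `response_emb` are [batch, dim] sentence embeddings from our encoder; the scale factor of 20 is an assumption, not a value from the paper.

```python
import torch
import torch.nn.functional as F

def multiple_negatives_ranking_loss(query_emb, response_emb, scale=20.0):
    # scores[i, j] = scaled cosine similarity between query i and response j.
    q = F.normalize(query_emb, dim=-1)
    r = F.normalize(response_emb, dim=-1)
    scores = scale * q @ r.T
    # The matching response for query i sits on the diagonal, so the target for
    # row i is simply i: softmax cross-entropy over the batch.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)

# usage: loss = multiple_negatives_ranking_loss(encode(queries), encode(responses))
```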
7.3.3 Cosine-Similarity Fine-Tuning
- Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. Reimers and Gurevych [2019]
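Rough sketch of the Sentence-BERT setup: both sentences go through the same shared-weight encoder, embeddings come from mean pooling, and the cosine similarity is regressed onto the gold score with MSE. The `encoder` interface (HF-style output with `last_hidden_state`) and gold scores rescaled to the cosine range are assumptions about our pipeline.

```python
import torch
import torch.nn.functional as F

def mean_pool(last_hidden_state, attention_mask):
    # Average the token embeddings, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()
    return (last_hidden_state * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)

def cosine_similarity_loss(encoder, batch_a, batch_b, gold_scores):
    emb_a = mean_pool(encoder(**batch_a).last_hidden_state, batch_a["attention_mask"])
    emb_b = mean_pool(encoder(**batch_b).last_hidden_state, batch_b["attention_mask"])
    pred = F.cosine_similarity(emb_a, emb_b)  # in [-1, 1]
    return F.mse_loss(pred, gold_scores)      # gold_scores assumed rescaled to [-1, 1]
```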
7.3.4 Fine-Tuning with Regularized Optimization
- SMART: Robust and Efficient Fine-Tuning for Pre-trained Natural Language Models through Principled Regularized Optimization. Jiang et al. [2020]
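Rough sketch of SMART's smoothness-inducing adversarial regularizer, which gets added on top of the ordinary task loss: perturb the input embeddings, take one ascent step toward the perturbation that changes the prediction the most, and penalize the symmetric KL between clean and perturbed predictions. `model_logits_from_embeds` (a function from input embeddings to logits), the single ascent step, and the epsilon/step-size values are simplifying assumptions; the full method in the paper also includes Bregman proximal point optimization.

```python
import torch
import torch.nn.functional as F

def symmetric_kl(p_logits, q_logits):
    p, q = F.log_softmax(p_logits, dim=-1), F.log_softmax(q_logits, dim=-1)
    return (F.kl_div(q, p.exp(), reduction="batchmean")
            + F.kl_div(p, q.exp(), reduction="batchmean"))

def smart_regularizer(model_logits_from_embeds, embeds, clean_logits,
                      epsilon=1e-5, step_size=1e-3):
    # Start from small random noise on the input embeddings.
    noise = (torch.randn_like(embeds) * epsilon).requires_grad_()
    adv_loss = symmetric_kl(model_logits_from_embeds(embeds + noise), clean_logits.detach())
    grad, = torch.autograd.grad(adv_loss, noise)
    # One ascent step in the direction that most changes the model's prediction.
    noise = (noise + step_size * grad / (grad.norm() + 1e-12)).detach()
    adv_logits = model_logits_from_embeds(embeds + noise)
    return symmetric_kl(adv_logits, clean_logits.detach())

# usage: total_loss = task_loss + lambda_smart * smart_regularizer(f, embeds, logits)
```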
7.3.5 Multitask Fine-Tuning
- BERT and PALs: Projected Attention Layers for Efficient Adaptation in Multi-Task Learning. Stickland and Murray [2019]
- MTRec: Multi-Task Learning over BERT for News Recommendation. Bi et al. [2022] @sun.qumeng
- Gradient surgery for multi-task learning. Yu et al. [2020]
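Of these three, the gradient-surgery (PCGrad) idea is the most self-contained, so here is a rough sketch for two task losses sharing an encoder: if the two task gradients conflict (negative dot product), project each onto the normal plane of the other before summing and updating. The two-task restriction and the plain SGD-style write-back are simplifying assumptions; in practice the combined gradient would be handed to our optimizer.

```python
import torch

def flat_grad(loss, params):
    # Flatten one task's gradients over all shared parameters into a single vector.
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([(g if g is not None else torch.zeros_like(p)).reshape(-1)
                      for g, p in zip(grads, params)])

def pcgrad_combine(g_a, g_b):
    # If the gradients conflict, remove from each the component pointing against the other.
    dot = torch.dot(g_a, g_b)
    if dot < 0:
        g_a_proj = g_a - dot / g_b.norm().pow(2) * g_b
        g_b_proj = g_b - dot / g_a.norm().pow(2) * g_a
        return g_a_proj + g_b_proj
    return g_a + g_b

def sgd_write_back(params, combined, lr=1e-5):
    # Apply the combined gradient with a plain SGD step (placeholder for a real optimizer).
    offset = 0
    with torch.no_grad():
        for p in params:
            n = p.numel()
            p -= lr * combined[offset:offset + n].view_as(p)
            offset += n

# usage: params = [p for p in model.parameters() if p.requires_grad]
#        sgd_write_back(params, pcgrad_combine(flat_grad(loss_a, params),
#                                              flat_grad(loss_b, params)))
```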
7.3.6 Contrastive Learning
- SimCSE: Simple Contrastive Learning of Sentence Embeddings. Gao et al. [2021]
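Rough sketch of the unsupervised variant from the paper: run the same batch through the encoder twice so that independent dropout masks produce two "views" of each sentence, then apply an in-batch cross-entropy loss over cosine similarities. [CLS] pooling and temperature 0.05 follow the common setup but are assumptions about our model; the encoder must be in train mode so dropout is active.

```python
import torch
import torch.nn.functional as F

def simcse_loss(encoder, batch, temperature=0.05):
    # Two forward passes of the same inputs; dropout makes the two embeddings differ.
    z1 = F.normalize(encoder(**batch).last_hidden_state[:, 0], dim=-1)  # [CLS] pooling
    z2 = F.normalize(encoder(**batch).last_hidden_state[:, 0], dim=-1)
    # sim[i, j] = similarity between view 1 of sentence i and view 2 of sentence j;
    # the positive pair for sentence i sits on the diagonal.
    sim = z1 @ z2.T / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```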
Qumeng assigned Yasir to be in charge of the issue.
Edited by Qumeng Sun