[HTML] Few-shot learning for medical text: A review of advances, trends, and opportunities
Background: Few-shot learning (FSL) is a class of machine learning methods that require
small numbers of labeled instances for training. With many medical topics having limited …
Few-shot learning for named entity recognition in medical text
Deep neural network models have recently achieved state-of-the-art performance gains in a
variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria …
CancerGPT for few shot drug pair synergy prediction using large pretrained language models
Large language models (LLMs) have been shown to have significant potential in few-shot
learning across various fields, even with minimal training data. However, their ability to …
Learning to few-shot learn across diverse natural language classification tasks
Self-supervised pre-training of transformer models has shown enormous success in
improving performance on a number of downstream tasks. However, fine-tuning on a new …
GPT-3 models are poor few-shot learners in the biomedical domain
Deep neural language models have set new breakthroughs in many tasks of Natural
Language Processing (NLP). Recent work has shown that deep transformer language …
STraTA: Self-training with task augmentation for better few-shot learning
Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language
models do not perform as well in few-shot settings where only a handful of training examples …
RAFT: A real-world few-shot text classification benchmark
Large pre-trained language models have shown promise for few-shot learning, completing
text-based tasks given only a few task-specific examples. Will models soon solve …
Perfect: Prompt-free and efficient few-shot learning with language models
RK Mahabadi, L Zettlemoyer, J Henderson… - arXiv preprint arXiv …, 2022 - arxiv.org
Current methods for few-shot fine-tuning of pretrained masked language models (PLMs)
require carefully engineered prompts and verbalizers for each new task to convert examples …
Towards few-shot fact-checking via perplexity
Few-shot learning has drawn researchers' attention to overcome the problem of data
scarcity. Recently, large pre-trained language models have shown great performance in few …
Revisiting self-training for few-shot learning of language model
As unlabeled data carry rich task-relevant information, they have proven useful for few-shot
learning of language models. The question is how to effectively make use of such data. In this …