Few-shot learning for medical text: A review of advances, trends, and opportunities

Y Ge, Y Guo, S Das, MA Al-Garadi, A Sarker - Journal of Biomedical …, 2023 - Elsevier
Background: Few-shot learning (FSL) is a class of machine learning methods that require
small numbers of labeled instances for training. With many medical topics having limited …

Few-shot learning for named entity recognition in medical text

M Hofer, A Kormilitzin, P Goldberg… - arXiv preprint arXiv …, 2018 - arxiv.org
Deep neural network models have recently achieved state-of-the-art performance gains in a
variety of natural language processing (NLP) tasks (Young, Hazarika, Poria, & Cambria …

CancerGPT for few shot drug pair synergy prediction using large pretrained language models

T Li, S Shetty, A Kamath, A Jaiswal, X Jiang… - NPJ Digital …, 2024 - nature.com
Large language models (LLMs) have been shown to have significant potential in few-shot
learning across various fields, even with minimal training data. However, their ability to …

Learning to few-shot learn across diverse natural language classification tasks

T Bansal, R Jha, A McCallum - arXiv preprint arXiv:1911.03863, 2019 - arxiv.org
Self-supervised pre-training of transformer models has shown enormous success in
improving performance on a number of downstream tasks. However, fine-tuning on a new …

GPT-3 models are poor few-shot learners in the biomedical domain

M Moradi, K Blagec, F Haberl, M Samwald - arXiv preprint arXiv …, 2021 - arxiv.org
Deep neural language models have set new breakthroughs in many tasks of Natural
Language Processing (NLP). Recent work has shown that deep transformer language …

STraTA: Self-training with task augmentation for better few-shot learning

T Vu, MT Luong, QV Le, G Simon, M Iyyer - arXiv preprint arXiv …, 2021 - arxiv.org
Despite their recent successes in tackling many NLP tasks, large-scale pre-trained language
models do not perform as well in few-shot settings where only a handful of training examples …

RAFT: A real-world few-shot text classification benchmark

N Alex, E Lifland, L Tunstall, A Thakur… - arXiv preprint arXiv …, 2021 - arxiv.org
Large pre-trained language models have shown promise for few-shot learning, completing
text-based tasks given only a few task-specific examples. Will models soon solve …

PERFECT: Prompt-free and efficient few-shot learning with language models

RK Mahabadi, L Zettlemoyer, J Henderson… - arXiv preprint arXiv …, 2022 - arxiv.org
Current methods for few-shot fine-tuning of pretrained masked language models (PLMs)
require carefully engineered prompts and verbalizers for each new task to convert examples …

Towards few-shot fact-checking via perplexity

N Lee, Y Bang, A Madotto, M Khabsa… - arXiv preprint arXiv …, 2021 - arxiv.org
Few-shot learning has drawn researchers' attention to overcome the problem of data
scarcity. Recently, large pre-trained language models have shown great performance in few …

Revisiting self-training for few-shot learning of language model

Y Chen, Y Zhang, C Zhang, G Lee, R Cheng… - arXiv preprint arXiv …, 2021 - arxiv.org
As unlabeled data carry rich task-relevant information, they are proven useful for few-shot
learning of language models. The question is how to effectively make use of such data. In this …
