Language models are few-shot multilingual learners

GI Winata, A Madotto, Z Lin, R Liu, J Yosinski… - arXiv preprint arXiv …, 2021 - arxiv.org
General-purpose language models have demonstrated impressive capabilities, performing
on par with state-of-the-art approaches on a range of downstream natural language …

Neural path hunter: Reducing hallucination in dialogue systems via path grounding

N Dziri, A Madotto, O Zaïane, AJ Bose - arXiv preprint arXiv:2104.08455, 2021 - arxiv.org
Dialogue systems powered by large pre-trained language models (LMs) exhibit an innate
ability to deliver fluent and natural-looking responses. Despite their impressive generation …

Towards information-rich, logical dialogue systems with knowledge-enhanced neural models

H Wang, B Guo, W Wu, S Liu, Z Yu - Neurocomputing, 2021 - Elsevier
Dialogue systems have made promising progress driven by deep learning techniques and
have been widely applied in daily life. However, existing end-to-end neural …

Retrieval-free knowledge-grounded dialogue response generation with adapters

Y Xu, E Ishii, S Cahyawijaya, Z Liu, GI Winata… - arXiv preprint arXiv …, 2021 - arxiv.org
To diversify and enrich generated dialogue responses, knowledge-grounded dialogue has
been investigated in recent years. The existing methods tackle the knowledge grounding …

CAiRE in DialDoc21: Data augmentation for information seeking dialogue system

Y Xu, E Ishii, GI Winata, Z Lin, A Madotto… - Proceedings of the …, 2021 - aclanthology.org
Information-seeking dialogue systems, including knowledge identification and
response generation, aim to respond to users with fluent, coherent, and informative …

Greenformers: Improving computation and memory efficiency in transformer models via low-rank approximation

S Cahyawijaya - arXiv preprint arXiv:2108.10808, 2021 - arxiv.org
In this thesis, we introduce Greenformers, a collection of methods to improve the
efficiency of the recently renowned transformer models with a low-rank …

A comparative study on language models for task-oriented dialogue systems

VM Andreas, GI Winata… - 2021 8th International …, 2021 - ieeexplore.ieee.org
The recent development of language models has shown promising results, achieving
state-of-the-art performance on various natural language tasks by fine-tuning pre-trained …

High-quality dialogue diversification by intermittent short extension ensembles

Z Tang, H Kulkarni, GH Yang - Findings of the Association for …, 2021 - aclanthology.org
Many task-oriented dialogue systems use deep reinforcement learning (DRL) to learn
policies that respond to the user appropriately and complete the tasks successfully. Training …

High-quality diversification for task-oriented dialogue systems

Z Tang, H Kulkarni, GH Yang - arXiv preprint arXiv:2106.00891, 2021 - arxiv.org
Many task-oriented dialogue systems use deep reinforcement learning (DRL) to learn
policies that respond to the user appropriately and complete the tasks successfully. Training …

Partner personas generation for diverse dialogue generation

H Lu, W Lam, H Cheng, HM Meng - arXiv preprint arXiv:2111.13833, 2021 - arxiv.org
Incorporating persona information allows diverse and engaging responses in dialogue
response generation. Unfortunately, prior works have primarily focused on self personas …