Publications

Learning to Converse with Noisy Data: Generation with Calibration

Published in IJCAI, 2018

In this paper, we consider training data quality for open-domain dialogue systems. To address the noisy-training-data problem, we propose a generation-with-calibration framework that measures the quality of each training instance and uses this information to improve the training of the generation model. Experiments show that our framework outperforms traditional generation models on both automatic and human evaluation metrics.
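The core idea of weighting training instances by an estimated quality score can be sketched as follows. This is a minimal illustration, not the paper's implementation: the per-instance quality scores and the weighted-average form of the loss are assumptions for the example.

```python
import numpy as np

def calibrated_loss(instance_nll, quality):
    """Weight each instance's negative log-likelihood by its quality score.

    instance_nll : per-instance NLL from the generation model
    quality      : hypothetical calibration scores in [0, 1], higher = cleaner
    """
    instance_nll = np.asarray(instance_nll, dtype=float)
    quality = np.asarray(quality, dtype=float)
    # Noisy instances (low quality) contribute less to the objective.
    return float(np.sum(quality * instance_nll) / np.sum(quality))

# A clean pair (q=1.0) dominates; a noisy pair (q=0.1) is down-weighted.
loss = calibrated_loss([2.0, 5.0], [1.0, 0.1])
```

In this sketch the weighted loss is (1.0·2.0 + 0.1·5.0)/1.1 ≈ 2.27, much closer to the clean instance's loss than an unweighted mean (3.5) would be.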

Download here

One “Ruler” for All Languages: Multi-Lingual Dialogue Evaluation with Adversarial Multi-Task Learning

Published in IJCAI, 2018

In this paper, we propose an adversarial multi-task neural metric for multi-lingual dialogue evaluation that shares feature extraction across languages. In addition, we apply an adversarial strategy to the shared space to guarantee the purity of the shared features. Our model treats evaluation on each language corpus as a single task and integrates these tasks under an adversarial multi-task learning framework. Experiments show that the proposed model outperforms its monolingual counterparts and various existing metrics.
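One common way to realize such an adversarial strategy is a gradient-reversal layer between the shared encoder and a language discriminator, sketched below. This is an illustrative sketch under that assumption; the scaling coefficient `lam` and the layer placement are not details taken from the paper.

```python
import numpy as np

class GradientReversal:
    """Identity on the forward pass; reversed, scaled gradient on backward.

    Placed between a shared feature extractor and a language discriminator,
    it pushes the shared space toward language-agnostic ("pure") features.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # assumed trade-off coefficient

    def forward(self, shared_features):
        # The discriminator sees the shared features unchanged.
        return shared_features

    def backward(self, grad_from_discriminator):
        # Reverse the gradient so the shared encoder learns features the
        # discriminator cannot use to identify the source language.
        return -self.lam * grad_from_discriminator

grl = GradientReversal(lam=0.5)
out = grl.forward(np.array([1.0, 2.0]))
grad = grl.backward(np.array([0.4, -0.2]))
```

The shared encoder thus receives a gradient that worsens the discriminator's language classification, while the task-specific heads are trained normally.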

Download here