Context-to-Session Matching: Utilizing Whole Session for Response Selection in Information-Seeking Dialogue Systems
Published in SIGKDD, 2020
Download here
Published in EMNLP, 2019
Download here
Published in AAAI, 2019
Download here
Published in IJCAI, 2018
In this paper, we consider training data quality for open-domain dialogue systems. To address the noisy-training-data problem, we propose a generation-with-calibration framework that measures the quality of each training instance and uses this information to improve the training of the generation model. Experiments show that our framework outperforms traditional generation models on both automatic and human evaluation metrics.
Download here
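The calibration idea above, weighting each training instance's loss by an estimated quality score so that noisy instances contribute less, can be sketched as follows. This is a minimal illustration only; the function name `calibrated_loss` and the scores are hypothetical, and the paper's actual calibration model for producing quality scores is not shown.

```python
def calibrated_loss(instance_losses, quality_scores):
    """Weight each instance's loss by its (normalized) estimated quality,
    so low-quality (noisy) instances contribute less to the training objective.
    A toy sketch; quality scores would come from a learned calibration model."""
    total_q = sum(quality_scores)
    return sum(q / total_q * loss
               for q, loss in zip(quality_scores, instance_losses))

# A noisy instance (quality 0.1) is down-weighted relative to a clean one (0.9):
loss = calibrated_loss([2.0, 0.5], [0.1, 0.9])  # 0.1*2.0 + 0.9*0.5 = 0.65
```

With uniform weights the mean loss would be 1.25, so the calibration weighting visibly reduces the influence of the high-loss noisy instance.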
Published in IJCAI, 2018
In this paper, we propose an adversarial multi-task neural metric for multilingual dialogue evaluation that shares feature extraction across languages. In addition, we apply an adversarial strategy to the shared space, which aims to guarantee the purity of the shared features. Our model treats the metric trained on each language corpus as a single task and integrates these tasks under an adversarial multi-task learning framework. Experiments show that the proposed model outperforms the monolingual baselines and various existing metrics.
Download here
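The adversarial objective described above can be sketched as a scalar combination: the per-language task losses are minimized while the language discriminator's loss is subtracted, so the shared encoder is rewarded for confusing the discriminator and the shared space stays language-agnostic. The function and the trade-off coefficient `lam` are hypothetical illustrations, not the paper's exact formulation.

```python
def adversarial_multitask_objective(task_losses, discriminator_loss, lam=0.5):
    """Sum the per-language task losses and subtract the language
    discriminator's loss scaled by `lam`: from the shared encoder's view,
    a *worse* discriminator (higher loss) is better, which pushes shared
    features to carry no language-identifying information. Hypothetical sketch."""
    return sum(task_losses) - lam * discriminator_loss

# Two language tasks plus an adversarial term:
obj = adversarial_multitask_objective([1.0, 2.0], discriminator_loss=0.8)
```

In practice this min-max game is usually trained with alternating updates or a gradient-reversal layer; the sketch only shows how the scalar objective is assembled.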
Published in IJCAI, 2018
In this paper, we propose a Seq2Seq model with a multi-head attention mechanism that attends to different semantic parts of an input query, enabling the decoder to explicitly generate the reply. We call it the Multi-Head Attention Aware Dialog System (MHAM).
Download here
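The multi-head attention mechanism above, where each head attends to a different semantic part of the input before the results are concatenated, can be sketched in plain NumPy. The random projections stand in for learned weight matrices; this is an illustrative toy, not the paper's MHAM implementation.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_head_attention(query, keys, values, num_heads, seed=0):
    """Attend to `keys`/`values` with `num_heads` independent heads, each
    using its own projection (random here, learned in a real model), then
    concatenate the per-head context vectors. A minimal illustrative sketch."""
    rng = np.random.default_rng(seed)
    d_model = query.shape[-1]
    assert d_model % num_heads == 0
    d_head = d_model // num_heads
    head_outputs = []
    for _ in range(num_heads):
        wq = rng.normal(size=(d_model, d_head))  # stand-in for learned W_q
        wk = rng.normal(size=(d_model, d_head))  # stand-in for learned W_k
        wv = rng.normal(size=(d_model, d_head))  # stand-in for learned W_v
        q = query @ wq                        # (d_head,)
        k = keys @ wk                         # (seq_len, d_head)
        v = values @ wv                       # (seq_len, d_head)
        scores = k @ q / np.sqrt(d_head)      # scaled dot-product scores
        weights = softmax(scores)             # each head's attention distribution
        head_outputs.append(weights @ v)      # per-head context vector
    return np.concatenate(head_outputs)       # back to (d_model,)

tokens = np.random.default_rng(1).normal(size=(5, 8))  # 5 encoded query tokens
context = multi_head_attention(tokens[0], tokens, tokens, num_heads=2)
```

Because each head projects the query into a different subspace, the heads produce different attention distributions over the input, which is what lets the decoder draw on several semantic parts of the query at once.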