Research

— Human-Computer Conversation —

DR-D3Q: Dynamic Reward-based Dueling Deep Dyna-Q (2019)
In this paper, we propose a new approach called Dynamic Reward-based Dueling Deep Dyna-Q (DR-D3Q). DR-D3Q learns policies robustly in noisy environments and is easy to implement, combining a dynamic reward with the Dueling Deep Q-Network (Dueling DQN) inside the Deep Dyna-Q (DDQ) framework. Unlike a typical dialogue reward function, the dynamic reward provides feedback to the agent in real time, which helps the Dueling DQN adapt to the dialogue domain. To supplement the limited amount of real user experience, we adopt DDQ as the basic framework. Simulation experiments and human evaluation show that DR-D3Q significantly improves policy learning in noisy environments.
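The dueling architecture at the core of DR-D3Q can be summarized in a few lines. The sketch below, in PyTorch, shows only the standard Dueling DQN head plus a hypothetical dynamic_reward() helper; the layer sizes, the reward shaping, and the DDQ world model are illustrative assumptions rather than the paper's exact configuration.

import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    # Dueling head: separate state-value and advantage streams,
    # recombined as Q(s, a) = V(s) + A(s, a) - mean_a A(s, a).
    def __init__(self, state_dim, n_actions, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)
        self.advantage = nn.Linear(hidden, n_actions)

    def forward(self, state):
        h = self.shared(state)
        v = self.value(h)
        a = self.advantage(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def dynamic_reward(turn, max_turns, success):
    # Hypothetical per-turn reward: a small step penalty while the dialogue
    # is ongoing, and a success bonus that shrinks as the dialogue gets longer.
    if success is None:
        return -1.0
    return 2.0 * max_turns * (1.0 - turn / max_turns) if success else -float(max_turns)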

Cross-Domain Sentiment Transfer (2019)
In recent years, several neural-network-based methods for emotional text generation have been investigated. However, existing approaches often require large-scale annotated data. To address this problem, this paper proposes a GAN-based cross-domain text sentiment transfer model that uses annotated data from other domains to assist in training the emotional text generation network. By combining adversarial reinforcement learning with supervised learning, our model extracts patterns of sentiment transformation and applies them to emotional text generation. Experimental results show that our approach outperforms state-of-the-art methods and generates high-quality emotional text while preserving domain information and content semantics.
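How adversarial reinforcement learning and supervised learning can be combined is illustrated by the single loss function below. It is a generic mix of teacher-forced cross-entropy and a REINFORCE-style term driven by discriminator rewards; the weighting scheme and argument names are assumptions, not the paper's actual objective.

import torch.nn.functional as F

def generator_loss(logits, target_ids, log_probs, disc_rewards, alpha=0.5):
    # Supervised term: teacher-forced cross-entropy on annotated source-domain pairs.
    supervised = F.cross_entropy(logits.view(-1, logits.size(-1)), target_ids.view(-1))
    # Adversarial term: REINFORCE-style policy gradient, rewarding sampled
    # sequences that the sentiment discriminator scores highly.
    adversarial = -(log_probs * disc_rewards.detach()).mean()
    return alpha * supervised + (1.0 - alpha) * adversarial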

EsCVAE: Learning to Converse Emotionally Like Humans (2018)
Emotional intelligence is a key part of human intelligence, and endowing conversation models with it has become a recent research hotspot. Although several emotional conversation approaches have been introduced, none of them can decide an appropriate emotion category for the response. In this paper, we propose the EsCVAE-I and EsCVAE-II models, which learn to automatically specify a reasonable emotion category for response generation. We show that frequent emotion interaction patterns exist in human dialogue (e.g., happiness-like, anger-disgust), and our models learn these patterns and apply them to emotional conversation generation. Experiments show that the proposed approaches generate responses with appropriate emotions and yield significant improvements over baseline methods in emotional conversation.
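The core idea of specifying an emotion category through a latent variable can be sketched as a small conditional-VAE-style module. The class below is a minimal illustration in PyTorch: the single-layer prior network, the latent size, and the emotion classification head are assumptions for exposition, not the EsCVAE architecture itself.

import torch
import torch.nn as nn

class EmotionLatent(nn.Module):
    # Samples a latent variable from a prior conditioned on the dialogue
    # context, then predicts an emotion category from that latent; both the
    # latent and the chosen emotion would condition the response decoder.
    def __init__(self, ctx_dim, z_dim, n_emotions):
        super().__init__()
        self.prior = nn.Linear(ctx_dim, 2 * z_dim)
        self.emotion_head = nn.Linear(z_dim, n_emotions)

    def forward(self, context):
        mu, logvar = self.prior(context).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        emotion_logits = self.emotion_head(z)
        return z, emotion_logits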

MECS: Multi-Emotional Conversation System (2017)
This paper describes our system for the NLPCC 2017 shared task on emotional conversation generation. Our model adopts a multi-task Seq2Seq learning framework that captures the textual information of the post sequence and generates responses for each emotion type simultaneously. Evaluation results suggest that our model is competitive at emotional generation, achieving an average emotion accuracy of 0.9658. We also observe emotional interaction in human conversation and relate it to empathy at the psychological level. Overall, our model achieves a total score of 325 and an average score of 0.545, winning first place among generation-based approaches and third place on the entire task (which also included retrieval-based methods).
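A multi-task Seq2Seq setup of this kind can be laid out as a shared post encoder with one decoder per emotion category. The sketch below is an illustrative PyTorch layout; the GRU sizes, embedding dimensions, and emotion label set are assumptions rather than the submitted system's configuration.

import torch.nn as nn

EMOTIONS = ["like", "happiness", "sadness", "disgust", "anger"]  # assumed label set

class MultiEmotionSeq2Seq(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)   # shared post encoder
        # One decoder and output head per emotion, trained jointly (multi-task).
        self.decoders = nn.ModuleDict({e: nn.GRU(emb_dim, hid_dim, batch_first=True) for e in EMOTIONS})
        self.heads = nn.ModuleDict({e: nn.Linear(hid_dim, vocab_size) for e in EMOTIONS})

    def forward(self, post_ids, response_ids, emotion):
        _, h = self.encoder(self.embed(post_ids))                    # encode the post once
        out, _ = self.decoders[emotion](self.embed(response_ids), h) # emotion-specific decoding
        return self.heads[emotion](out)                              # vocabulary logits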