
Frontiers of Information Technology & Electronic Engineering >> 2017, Volume 18, Issue 4 doi: 10.1631/FITEE.1601232

Attention-based encoder-decoder model for answer selection in question answering

College of Computer, National University of Defense Technology, Changsha 410073, China; Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China; Luoyang Electronic Equipment Test Center, Luoyang 471003, China

Available online: 2017-05-12


Abstract

One of the key challenges for question answering is to bridge the lexical gap between questions and answers, since a question and its correct answer may share no words at all. Machine translation models have been shown to help bridge this lexical gap between question-answer pairs. In this paper, we introduce an attention-based deep learning model for the answer selection task in question answering. The proposed model employs a bidirectional long short-term memory (LSTM) encoder-decoder, an architecture that has proved effective on machine translation tasks, to bridge the lexical gap between questions and answers. Our model also uses a step attention mechanism that allows the question to focus on a certain part of the candidate answer. Finally, we evaluate our model on a benchmark dataset, and the results show that our approach outperforms existing approaches. Integrating our model significantly improves the performance of our question answering system in the TREC 2015 LiveQA task.
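The abstract does not give the scoring function, so the following is only a minimal numpy sketch of what a step attention mechanism of this kind typically looks like: a pooled question vector attends over the answer-side BiLSTM hidden states, producing a weighted context vector. The bilinear scoring matrix `W`, the pooling choice, and all dimensions here are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def step_attention(question_vec, answer_states, W):
    """Attend over answer time steps with a bilinear score (assumed form).

    question_vec:  (d,)   pooled question representation
    answer_states: (T, d) BiLSTM hidden state at each answer time step
    W:             (d, d) bilinear attention parameters (hypothetical)
    """
    scores = answer_states @ W @ question_vec   # one score per time step, shape (T,)
    weights = softmax(scores)                   # attention distribution over steps
    context = weights @ answer_states           # weighted sum of answer states, shape (d,)
    return context, weights

rng = np.random.default_rng(0)
d, T = 8, 5                              # hidden size and answer length (illustrative)
q = rng.standard_normal(d)               # pooled question vector
H = rng.standard_normal((T, d))          # answer-side hidden states
W = rng.standard_normal((d, d))

ctx, w = step_attention(q, H, W)
```

The attention weights `w` form a probability distribution over answer time steps, so the question effectively "focuses" on the most relevant parts of the candidate answer; `ctx` can then be compared with the question representation to score the candidate.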
