Arabic community question answering

Published online by Cambridge University Press:  19 December 2018

PRESLAV NAKOV
Affiliation:
Arabic Language Technologies, Qatar Computing Research Institute, HBKU, HBKU Research Complex, PO Box 5825, Doha, Qatar
LLUÍS MÀRQUEZ
Affiliation:
Amazon, Carrer de Tànger 76, 08018 Barcelona, Spain
ALESSANDRO MOSCHITTI
Affiliation:
Amazon, 1240 Rosecrans Ave #120, Manhattan Beach, CA 90266, USA
HAMDY MUBARAK
Affiliation:
Arabic Language Technologies, Qatar Computing Research Institute, HBKU, HBKU Research Complex, PO Box 5825, Doha, Qatar

Abstract

We analyze resources and models for Arabic community Question Answering (cQA). In particular, we focus on CQA-MD, our cQA corpus for Arabic in the domain of medical forums. We describe the corpus and the main challenges it poses, which stem from its mix of informal and formal language, its blend of different Arabic dialects, and its medical nature. We further present a shared task on cQA at SemEval, the International Workshop on Semantic Evaluation, based on this corpus. We discuss the features and the machine learning approaches used by the teams that participated in the task, with a focus on models that exploit syntactic information using convolutional tree kernels and neural word embeddings. We further analyze and extend the outcome of the SemEval challenge by training a meta-classifier that combines the output of several systems, which allows us to compare different features and different learning algorithms in an indirect way. Finally, we analyze the most frequent errors common to all approaches, categorizing them into prototypical cases, and zooming into the way syntactic information in tree kernel approaches can help solve some of the most difficult ones. We believe that our analysis, together with the lessons learned from the corpus creation process and from the shared task, will be helpful for future research on Arabic cQA.
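The abstract mentions training a meta-classifier over the outputs of several participating systems, but does not specify the setup. The snippet below is only a minimal illustrative sketch of that general idea, assuming each base system produces a relevance score per question–answer pair and that a logistic regression stacker combines them; the data, feature layout, and choice of learner are hypothetical and not the authors' actual configuration.

```python
# Hypothetical sketch of a stacking meta-classifier that combines per-system
# relevance scores for (question, answer) pairs. This is NOT the authors'
# implementation; all numbers and modeling choices are illustrative only.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical scores from three base cQA systems for five question-answer
# pairs (rows = pairs, columns = systems), plus gold relevance labels.
system_scores = np.array([
    [0.91, 0.75, 0.80],
    [0.12, 0.30, 0.25],
    [0.65, 0.55, 0.70],
    [0.20, 0.45, 0.15],
    [0.85, 0.90, 0.60],
])
gold_labels = np.array([1, 0, 1, 0, 1])  # 1 = relevant, 0 = irrelevant

# The meta-classifier learns how much weight to give each base system.
meta = LogisticRegression()
meta.fit(system_scores, gold_labels)

# Combined probabilities can be used to re-rank answers; the learned weights
# give an indirect comparison of the base systems (and hence of their
# features and learning algorithms).
new_pair = np.array([[0.70, 0.40, 0.55]])
print("P(relevant):", meta.predict_proba(new_pair)[0, 1])
print("Per-system weights:", meta.coef_[0])
```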

Type
Article
Copyright
Copyright © Cambridge University Press 2018 

Footnotes

The authors would like to thank the anonymous reviewers for their constructive comments, which have helped us improve the quality of the paper. This research was performed by the Arabic Language Technologies (ALT) group at the Qatar Computing Research Institute (QCRI), HBKU, part of Qatar Foundation. It is part of the Interactive Systems for Answer Search (Iyas) project, which is developed in collaboration with MIT-CSAIL.

Work conducted while these authors were at QCRI, HBKU.
