Document Type : Original Research Paper


1 Kharazmi International Campus, Shahrood University, Shahrood, Iran.

2 Kharazmi International Campus, Shahrood University, Shahrood, Iran.


Background and Objectives: Evaluating discourse coherence is a critical but challenging task for content analysis across Natural Language Processing subfields such as text summarization, question answering, text generation, and machine translation. Existing methods, such as entity-based and graph-based models, rely on the semantic and linguistic concepts of a text; as a result, they solve the problem only partially, since they are restricted to the word co-occurrence information available in sequential sentences within a short span of text. A major shortcoming of these methods is their limited ability to evaluate the coherence of long documents: they are suitable only for documents with a small number of sentences.
Methods: The proposed method addresses both local and global coherence. It assesses the local topic integrity of a text at the paragraph level, independently of word meaning and handcrafted rules. Global coherence is evaluated through the dependency between successive paragraphs. Building on word embeddings and statistical approaches, the method incorporates external word-correlation knowledge into short and long stories to assess local and global coherence simultaneously.
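As a rough illustration of the embedding-based local-coherence idea described above, the sketch below averages word vectors into sentence vectors and scores adjacent-sentence similarity. The tiny embedding table and the scoring function are hypothetical stand-ins, not the paper's actual model, which a real system would replace with trained word2vec vectors and the paragraph-level dependency measure.

```python
import numpy as np

# Toy embedding table standing in for pretrained word2vec vectors
# (hypothetical values chosen for illustration only).
EMB = {
    "the": np.array([0.1, 0.3]), "cat": np.array([0.9, 0.2]),
    "sat": np.array([0.8, 0.3]), "dog": np.array([0.85, 0.25]),
    "ran": np.array([0.7, 0.4]), "stocks": np.array([-0.9, 0.8]),
    "fell": np.array([-0.8, 0.7]),
}

def sentence_vector(tokens):
    """Average the word vectors of a sentence (zero vector if no known words)."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(2)

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def local_coherence(sentences):
    """Mean cosine similarity of adjacent sentence vectors: a simple
    proxy for local topic integrity of a passage."""
    vecs = [sentence_vector(s) for s in sentences]
    sims = [cosine(a, b) for a, b in zip(vecs, vecs[1:])]
    return sum(sims) / len(sims)

coherent = [["the", "cat", "sat"], ["the", "dog", "ran"]]
mixed = [["the", "cat", "sat"], ["stocks", "fell"]]
```

On-topic sentence pairs score higher than topically unrelated pairs, which is the intuition such embedding-based coherence measures exploit.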
Results: Using combined word2vec sentence vectors and most likely n-grams, we show that the proposed method is independent of the language and its semantic concepts. The results indicate that the proposed method achieves higher accuracy than the other algorithms on long documents with a large number of sentences.
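The "most likely n-grams" component mentioned above can be approximated, in its simplest form, as frequency-based n-gram selection. The helper below is an illustrative sketch of that idea, not the paper's exact extraction procedure.

```python
from collections import Counter

def most_likely_ngrams(tokens, n, k):
    """Return the k most frequent n-grams in a token sequence."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return [gram for gram, _ in grams.most_common(k)]

tokens = "the cat sat on the mat the cat ran".split()
top_bigrams = most_likely_ngrams(tokens, 2, 2)
```

Here ("the", "cat") occurs twice while every other bigram occurs once, so it ranks first; in a coherence model such high-probability n-grams serve as language-independent statistical cues alongside the embedding vectors.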
Conclusion: Comparing the proposed method with the BGSEG method shows a 1.19 percent improvement in the mean degree of coherence evaluation. The results also indicate that the improvement is greater for larger texts with more sentences.

©2018 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers.




