Document Type: Original Research Paper


Department of Communication Engineering, Faculty of Electrical and Computer Engineering, University of Birjand, Birjand, Iran.


Background and Objectives: Video processing has received considerable attention in recent years. Social group detection is one of the most important problems in crowd analysis. For human-like robots, detecting groups and the relationships between group members is important. Moving in a group, which consists of two or more people, means that the members of the group move in the same direction and at the same speed.
Methods: In the proposed method, a deep neural network (DNN) detects social groups using features including Euclidean distance, proximity distance, motion causality, trajectory shape, and heat maps. First, features are extracted for every pair of people in the video, and a feature matrix is constructed. Next, the DNN learns social groups from the feature matrix.
Results: The goal is to detect social groups of two or more individuals. The proposed method detects social groups using the DNN and the extracted features. Finally, the output of the proposed method is compared with that of other methods.
Conclusion: In recent years, the use of deep neural networks (DNNs) for learning and detection has increased. In this work, we used DNNs with the extracted features to detect social groups. The evaluation results and the outputs on the test videos demonstrate the effectiveness of DNNs combined with the extracted features.
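The pipeline described in the Methods section can be illustrated with a minimal sketch. The feature set and network here are illustrative assumptions, not the paper's implementation: only distance-based and velocity-direction features are computed (the paper's motion causality and heat-map features are omitted), and the pairwise scores come from a tiny untrained feed-forward network standing in for the DNN.

```python
import numpy as np

def pair_features(traj_a, traj_b):
    """Illustrative pairwise features from two trajectories of shape (T, 2).

    Returns mean Euclidean distance, minimum (proximity) distance, and mean
    velocity-direction similarity as a simple trajectory-shape cue."""
    d = np.linalg.norm(traj_a - traj_b, axis=1)        # frame-wise distances
    va = np.diff(traj_a, axis=0)                       # frame-wise velocities
    vb = np.diff(traj_b, axis=0)
    cos = np.sum(va * vb, axis=1) / (
        np.linalg.norm(va, axis=1) * np.linalg.norm(vb, axis=1) + 1e-8)
    return np.array([d.mean(), d.min(), cos.mean()])

def feature_matrix(trajs):
    """Stack features for every unordered pair of people into one matrix."""
    pairs = [(i, j) for i in range(len(trajs)) for j in range(i + 1, len(trajs))]
    X = np.stack([pair_features(trajs[i], trajs[j]) for i, j in pairs])
    return pairs, X

def mlp_score(X, W1, b1, W2, b2):
    """Tiny feed-forward net: ReLU hidden layer, sigmoid group-membership score."""
    h = np.maximum(0.0, X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

# Synthetic example: two people walking together, one walking away.
t = np.linspace(0.0, 1.0, 20)[:, None]
trajs = [np.hstack([t, t]), np.hstack([t, t + 0.1]), np.hstack([-t, 5.0 - t])]
pairs, X = feature_matrix(trajs)                       # X has one row per pair

rng = np.random.default_rng(0)                         # random (untrained) weights
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(size=8), 0.0
scores = mlp_score(X, W1, b1, W2, b2)                  # one score in (0, 1) per pair
```

In this sketch, each row of the feature matrix corresponds to one pair of people, and the network outputs a per-pair score; thresholding such scores and linking the accepted pairs would yield groups of two or more members, matching the stated goal.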

©2021 The author(s). This is an open access article distributed under the terms of the Creative Commons Attribution (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, as long as the original authors and source are cited. No permission is required from the authors or the publishers.



