Original Research Paper
Data Mining
Fatemeh Akbari; Eynollah Khanjari
Abstract
Background and Objectives: So far, several methods have been proposed to detect communities, which indicates the importance of community discovery for understanding social networks and uncovering useful hidden patterns in the network. The goal of such analyses is to find groups of users with common characteristics. Social networks are usually modeled as graphs, so the analysis is also carried out with graph methods, in which nodes represent individuals and edges represent the relationships between them. Since community detection is an NP-complete problem, several meta-heuristic approaches have been used to tackle it, mainly considering "modularity" as the objective function. Modularity, however, suffers from the resolution limit: it cannot detect small communities and instead merges them into larger ones.

Methods: In this paper, a new hybrid bee colony and genetic algorithm is proposed for community detection, which performs optimization using the "balanced modularity" fitness function. In this algorithm, parallel processing is used to speed up optimization, a genetic algorithm is used to create the initial population, and genetic operators are used in the bees' search.

Results: Experiments on well-known real-world networks, including Karate, American College Football, Dolphins, and Political Books, show that our method provides more accurate results than state-of-the-art community detection methods.

Conclusion: The combined bee colony and genetic optimization not only yields a globally optimal solution but also requires no prior information about the number or structure of communities.
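Standard (Newman) modularity, whose resolution limit motivates the balanced-modularity fitness function above, can be computed directly from the adjacency matrix. A minimal sketch for illustration only, not the paper's balanced variant:

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity Q of a partition of an undirected graph.

    adj: symmetric 0/1 adjacency matrix; labels: community id per node.
    Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j).
    """
    adj = np.asarray(adj, dtype=float)
    labels = np.asarray(labels)
    k = adj.sum(axis=1)                      # node degrees
    two_m = adj.sum()                        # 2 * number of edges
    same = labels[:, None] == labels[None, :]  # same-community mask
    expected = np.outer(k, k) / two_m        # null-model edge expectation
    return ((adj - expected) * same).sum() / two_m
```

A trivial all-in-one-community partition always gives Q = 0, which is a handy sanity check.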
Original Research Paper
Image Processing
Morteza Akbari; Seyyed Mohammad Razavi; Sajad Mohamadzadeh
Abstract
Background and Objectives: Multi-object tracking in dense, multi-camera environments remains challenging due to occlusions, lighting variations, and fragmented trajectories. Existing methods rely on hierarchical two-step approaches or complex Bayesian filters, and they often fail to fully exploit spatio-temporal correlations or to achieve global consistency across cameras and frames. This study addresses these limitations by proposing a novel graph-based deep learning model for continuous person tracking that optimizes spatial and temporal associations separately.

Methods: The proposed model decomposes multi-camera tracking into two tasks: temporal association (linking objects across frames using velocity and time) and spatial association (aligning objects from multiple viewpoints). A spatio-temporal graph structure is constructed, with nodes representing detected objects and edges encoding the relationships between them. Message Passing Networks (MPNs) iteratively update node and edge features, while a graph consensus fusion module merges the spatial and temporal graphs for robust tracking. The model is trained with Focal Loss and evaluated on the Wildtrack and CAMPUS datasets.

Results: The model achieves state-of-the-art performance, with a MOTA score of 85.5% on Wildtrack and 77.4–87.4% on the CAMPUS subsets. Key improvements include a 100% MT (mostly tracked) rate and a 0% ML (mostly lost) rate on CAMPUS, demonstrating exceptional robustness in occluded and crowded scenes. The IDF1 score of 87.2% highlights superior identity preservation, and the decoupled design reduces graph size, which improves scalability.

Conclusion: By decoupling spatial and temporal associations and leveraging graph-based optimization, the proposed model significantly enhances tracking accuracy and reliability in multi-camera settings. This work provides a framework for applications such as surveillance and autonomous systems, with future potential for attention mechanisms and adaptive graph integration.
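One message-passing iteration of the kind MPNs perform, where each edge is updated from its endpoint nodes and each node then aggregates incoming edge messages, can be sketched as below. The weight matrices and tanh nonlinearity are illustrative assumptions, not the paper's trained model:

```python
import numpy as np

def mpn_step(h, edges, e_feat, W_edge, W_node):
    """One message-passing iteration on a detection graph (simplified sketch).

    h: (n, d) node features; edges: list of (i, j) index pairs;
    e_feat: (m, d) edge features; W_edge: (d, 3d); W_node: (d, 2d).
    """
    new_e = np.zeros_like(e_feat)
    msg = np.zeros_like(h)
    for k, (i, j) in enumerate(edges):
        z = np.concatenate([h[i], h[j], e_feat[k]])
        new_e[k] = np.tanh(W_edge @ z)   # edge update from its endpoints
        msg[i] += new_e[k]               # messages flow to both endpoints
        msg[j] += new_e[k]
    # node update from the node's own features plus aggregated messages
    new_h = np.tanh(np.concatenate([h, msg], axis=1) @ W_node.T)
    return new_h, new_e
```

In the paper's design, iterating such updates separately on the spatial and temporal graphs, then fusing them, is what yields the association scores.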
Original Research Paper
Analogue Integrated Circuits
Atousa Gholami Boorkheyli; Majid Babaeinik; Hadi Dehbovid; Vahid Ghods
Abstract
Background and Objectives: This research aims to optimize component placement in integrated systems using evolutionary algorithms. The primary goal is to generate a compact floorplan while satisfying design constraints, particularly in analog circuits, where symmetry and proximity constraints are critical to minimizing coupling interference and enhancing performance. The study proposes using a convolutional neural network (CNN) to extract these placement constraints, with its parameters optimized via the non-dominated sorting genetic algorithm III (NSGA-III). Additionally, a hybrid routing approach combining simulated annealing (SA) and NSGA-III is introduced to improve routing efficiency through multi-objective optimization.

Methods: The placement constraints, including symmetry and proximity requirements, are extracted using a CNN whose parameters are optimized by NSGA-III. For routing, a hybrid approach is employed in which SA generates initial routing solutions, which are then refined by NSGA-III for multi-objective optimization. The proposed method is implemented on a two-stage recycling folded cascode (RFC) amplifier in 0.18 μm CMOS technology with a 1.8 V supply voltage. A dedicated MATLAB toolbox is developed to facilitate placement while adhering to design rules using optimization algorithms.

Results: Simulation results confirm the effectiveness of the proposed methodology, demonstrating optimized placement and routing with improved circuit performance. The combination of CNN and NSGA-III successfully generates a compact and efficient layout, while the hybrid routing approach (SA + NSGA-III) enhances the routing process. The RFC amplifier case study shows better utilization of physical resources and performance improvements, validating the method's efficiency.

Conclusion: This study demonstrates that the proposed method, integrating evolutionary algorithms and a CNN, effectively optimizes placement and routing in integrated systems. The CNN-based constraint extraction and NSGA-III optimization enable compact layouts, while the hybrid routing approach improves multi-objective optimization. Simulations on the RFC amplifier confirm enhanced circuit performance and resource utilization. This method offers significant advantages over traditional approaches and is applicable to complex industrial designs.
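The SA stage that seeds initial routing solutions follows the classic annealing loop: improving moves are always accepted, worsening moves with a temperature-dependent probability. A generic sketch, where the cost and neighbor functions, cooling schedule, and parameters are all illustrative assumptions rather than the paper's settings:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, alpha=0.95, steps=500, seed=0):
    """Generic simulated annealing loop of the kind used to seed routing
    solutions before multi-objective refinement (illustrative sketch)."""
    rng = random.Random(seed)
    x, c = x0, cost(x0)
    best, best_c = x, c
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        cy = cost(y)
        # accept improvements always; worse moves with Boltzmann probability
        if cy < c or rng.random() < math.exp((c - cy) / max(t, 1e-12)):
            x, c = y, cy
            if c < best_c:
                best, best_c = x, c
        t *= alpha                    # geometric cooling schedule
    return best, best_c
```

For routing, `cost` would measure total wirelength or congestion and `neighbor` would perturb the current route; the annealed result then enters NSGA-III's population.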
Original Research Paper
Arash Kosari
Abstract
Background and Objectives: Quantum Key Distribution (QKD) ensures secure communication through quantum mechanics, but real-world implementations face vulnerabilities from detector blinding, time-shift, and side-channel attacks. While Measurement-Device-Independent QKD (MDI-QKD) mitigates detector vulnerabilities, it lacks real-time attack monitoring and struggles with finite-key limitations. This study presents an MDI ack QKD protocol that integrates deterministic acknowledgment pulses and multi-intensity decoy states to achieve robust, device-independent security with real-time attack detection.

Methods: The proposed protocol combines MDI-QKD's device-independent framework with interleaved deterministic acknowledgment pulses and four-level decoy intensities. Alice and Bob generate weak coherent pulses with randomized phases, embedding acknowledgment pulses with probability Pd = 0.1 to probe channel integrity. An untrusted relay performs Bell-state measurements using superconducting nanowire single-photon detectors (SNSPDs). Multi-intensity decoy statistics enable finite-key parameter estimation, while integrated photonic platforms ensure scalability. Security is analyzed in the universally composable framework, with simulations and preliminary experiments conducted over metropolitan fiber distances.

Results: Numerical simulations demonstrate secure key rates exceeding 10 Mbps at 50 km and ~1 Mbps at 100 km under realistic conditions (0.2 dB/km fiber loss, 85% detector efficiency, 1 GHz pulse rate). Experimental tests on an integrated photonic chip at 1550 nm achieved raw key rates of 1.1 Mbps at 50 km with decoy accuracy within ±7%. Deterministic acknowledgments detected blinding attacks with high sensitivity, and multi-intensity decoys provided tight finite-key bounds, maintaining composable security against collective and coherent attacks.

Conclusion: The MDI ack QKD protocol achieves high-rate, device-independent quantum key distribution with real-time attack monitoring, offering a scalable solution for metropolitan quantum networks. Its compatibility with integrated photonics enables compact, stable implementations, while deterministic acknowledgments and multi-intensity decoys ensure robust security against evolving threats. This approach paves the way for practical, unconditionally secure communication systems, with potential for satellite-ground and multi-node network extensions.
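The transmitter-side interleaving described above, embedding acknowledgment pulses with probability Pd = 0.1 among signal pulses drawn from four decoy intensity levels, can be sketched as follows. The intensity values and data layout are placeholder assumptions, not the paper's experimental settings:

```python
import random

def pulse_train(n, p_ack=0.1, intensities=(0.0, 0.1, 0.2, 0.5), seed=1):
    """Interleave deterministic acknowledgment pulses (probability p_ack)
    among signal pulses with four decoy intensity levels (placeholders)."""
    rng = random.Random(seed)
    train = []
    for _ in range(n):
        if rng.random() < p_ack:
            train.append(("ack", None))       # probes channel integrity
        else:
            train.append(("signal", rng.choice(intensities)))
    return train
```

Because the sender knows where the acknowledgment pulses sit, any statistical anomaly in their detection at the relay flags tampering such as a blinding attack.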
Original Research Paper
Wireless Communications
Mojtaba Hajiabadi; Naaser Neda; Amir Moradband Toroghi
Abstract
Background and Objectives: This research addresses channel estimation and beamforming in systems with a Reconfigurable Intelligent Surface (RIS). An RIS can significantly improve coverage by controlling the phase and amplitude of reflected signals through nearly passive elements. This advantage depends heavily on the availability of accurate channel state information (CSI), which is difficult to obtain, and even more so in realistic scenarios where the RIS phase shifts are limited to a small number of discrete levels due to hardware limitations.

Methods: To address this issue, we propose a new CSI estimation paradigm called recursive-averaging least squares (RALS), which extends the traditional least squares (LS) estimator but compensates for its weaknesses under low-SNR and quantized-phase regimes. The approach combines a recursive update scheme that sequentially improves the CSI estimates through recursive averaging with an adaptive feedback framework. This provides better robustness against noise and quantization-induced distortion and allows for more precise RIS configuration under the hardware constraints of a non-ideal system. The aim is to reduce both the channel estimation error and the bit error rate (BER) while keeping the method practical to implement. In addition, this study investigates the effect of discrete phase quantization.

Results: We analyze the performance of RALS under idealized continuous-phase and discrete-phase scenarios, where the phase of each RIS element is quantized with a finite number of bits. Simulation results show that RALS outperforms the traditional LS and other reference estimators in terms of MSD and BER, especially when the number of quantization bits is low or the SNR is poor.

Conclusion: Simulation results show that the proposed method provides more accurate channel estimation with a smaller estimation error. By integrating accurate channel estimation with an efficient beamforming strategy, overall system performance is significantly enhanced. More specifically, simulations show that 4-bit resolution is sufficient for phase discretization under realistic reflection-phase constraints. Notably, the devised approach achieves this improved performance without incurring high computational complexity, making it feasible for real-time implementation in RIS-based systems.
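The recursive-averaging core of the scheme, in its simplest form, is the running mean over successive LS estimates, h̄_k = h̄_{k-1} + (h_k − h̄_{k-1})/k, which suppresses noise as more snapshots arrive. A sketch under that assumption, omitting the adaptive feedback and phase-quantization handling that the full RALS adds:

```python
import numpy as np

def recursive_average(estimates):
    """Running mean of successive LS channel estimates:
    h_bar_k = h_bar_{k-1} + (h_k - h_bar_{k-1}) / k (noise averages out)."""
    h_bar = np.zeros_like(np.asarray(estimates[0], dtype=float))
    for k, h in enumerate(estimates, start=1):
        h_bar += (np.asarray(h, dtype=float) - h_bar) / k
    return h_bar
```

The recursive form needs only the previous average and the new LS snapshot, so memory and computation stay constant per update, which is what keeps the method feasible for real-time use.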
Original Research Paper
Cloud Computing
Gowri S; Jaganathan Rathi
Abstract
Background and Objectives: Cloud computing can play a vital role in promoting environmental sustainability by leveraging eco-friendly dedicated servers that adhere to green computing standards. The concept of "green cloud computing" revolves around harnessing cutting-edge technologies to minimize the environmental footprint of computing systems. One of the significant challenges in cloud-based systems is task scheduling, which must be optimized to enhance system efficiency, user experience, and environmental sustainability.

Methods: This paper proposes a novel Hybrid HEES (Hierarchical Energy-Efficient Scheduling) method that optimizes energy consumption and task scheduling in cloud computing environments. By combining genetic algorithm optimization, workflow-based scheduling, and energy-aware resource allocation, HEES achieves significant reductions in energy consumption and average task completion time.

Results: The method is evaluated through simulations, demonstrating its effectiveness in optimizing energy efficiency and task scheduling performance. The Hybrid HEES method has the potential to reduce energy consumption, improve computing performance, and enhance sustainability in cloud computing environments.

Conclusion: The proposed HEES method was evaluated through CloudSim 3.0 simulations. The numerical results confirm the effectiveness of the HEES algorithm, which achieves average energy-consumption improvements of around 12% over the existing GP method and 8% over the existing RR method.
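A toy version of the genetic-algorithm component, assigning tasks to VMs to minimize total energy (task length × VM power draw), might look like the following. The fitness function, operators, and all parameters are illustrative assumptions; the full HEES also accounts for workflow structure and completion time:

```python
import random

def ga_schedule(task_len, vm_power, pop=30, gens=40, seed=0):
    """Toy energy-aware GA: a chromosome maps each task to a VM index;
    fitness is total energy = sum(task length * VM power draw)."""
    rng = random.Random(seed)
    n, m = len(task_len), len(vm_power)

    def energy(c):
        return sum(task_len[i] * vm_power[c[i]] for i in range(n))

    popn = [[rng.randrange(m) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=energy)
        popn = popn[: pop // 2]                  # elitist selection
        while len(popn) < pop:
            a, b = rng.sample(popn[:10], 2)      # pick two fit parents
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(n)] = rng.randrange(m)  # mutation
            popn.append(child)
    best = min(popn, key=energy)
    return best, energy(best)
```

In this toy objective, the optimum trivially puts every task on the lowest-power VM; the GA earns its keep once conflicting objectives such as makespan are added to the fitness.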
Original Research Paper
Image Annotation and Retrieval
Sajad Mohamadzadeh; Mohammad Gharehbagh
Abstract
Background and Objectives: Content-Based Image Retrieval (CBIR) systems are crucial for managing the exponential growth of digital imagery. Traditional methods relying on handcrafted features often fail to scale and to capture semantic content. Although deep learning enhances retrieval quality, challenges persist in computational complexity and efficiency. This paper introduces a hybrid CBIR framework that combines unsupervised deep feature learning, adaptive hashing, and VP-Tree-based hierarchical search optimization. The proposed system, evaluated on CIFAR-10, an ImageNet subset, and a custom medical imaging dataset, achieves a mean average precision (mAP) of 96.1% and reduces retrieval latency by approximately 40% compared to conventional methods. By leveraging autoencoder-driven latent feature extraction and scalable metric-space partitioning, our framework demonstrates superior scalability, retrieval speed, and accuracy for large-scale applications.

Methods: The proposed framework employs autoencoder-driven latent space encoding to extract compact yet semantically rich feature representations, ensuring robust discriminability across diverse image categories. To enhance retrieval efficiency, a hybrid search mechanism is implemented: a Euclidean-based nearest neighbor scheme, O(N log N), is used for moderate-scale datasets, while a VP-Tree-based hashing scheme, O(log N), is applied for large-scale retrieval scenarios. By leveraging hierarchical metric-space partitioning, the method significantly reduces search complexity while maintaining retrieval accuracy.

Results: Extensive evaluations show that the proposed framework outperforms traditional and modern deep hashing techniques, achieving higher mean average precision, lower search latency, and better storage efficiency on both moderate and large-scale datasets. By integrating unsupervised representation learning, advanced hashing, and optimized search structures, the system surpasses conventional methods in speed and precision.

Conclusion: This study presents a highly scalable and computationally efficient CBIR framework that addresses the limitations of existing methods by combining unsupervised deep feature learning, adaptive hashing, and hierarchical search structures. The results highlight the framework's ability to achieve high retrieval accuracy and efficiency, making it suitable for real-time applications in large-scale multimedia repositories.
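The VP-Tree underlying the O(log N) large-scale search partitions the metric space around vantage points by a median distance and prunes subtrees with the triangle inequality. A minimal exact nearest-neighbor sketch, not the paper's hashing-integrated version:

```python
import math

class VPTree:
    """Minimal vantage-point tree for metric nearest-neighbor search."""

    def __init__(self, points, dist):
        self.dist = dist
        self.vp = points[0]                       # vantage point
        rest = points[1:]
        self.mu, self.left, self.right = None, None, None
        if rest:
            d = sorted(dist(self.vp, p) for p in rest)
            self.mu = d[len(d) // 2]              # median split radius
            inner = [p for p in rest if dist(self.vp, p) < self.mu]
            outer = [p for p in rest if dist(self.vp, p) >= self.mu]
            self.left = VPTree(inner, dist) if inner else None
            self.right = VPTree(outer, dist) if outer else None

    def nearest(self, q, best=None):
        d = self.dist(q, self.vp)
        if best is None or d < best[0]:
            best = (d, self.vp)
        if self.mu is not None:
            near, far = ((self.left, self.right) if d < self.mu
                         else (self.right, self.left))
            if near:
                best = near.nearest(q, best)
            # triangle inequality: far side only if it could hold a closer point
            if far and abs(d - self.mu) < best[0]:
                best = far.nearest(q, best)
        return best
```

Because pruning only relies on the metric axioms, the same structure works over the hashed latent codes with a Hamming distance as well as over raw Euclidean features.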
Original Research Paper
Graph Clustering
Mohammad Asadpour; Shahin Pourbahrami
Abstract
Background and Objectives: One of the most important clustering methods is density-based clustering. This technique operates on the idea that clusters are regions of higher data density, separated by areas of lower density. Density Peak Clustering (DPC) is a modern density-based algorithm designed to efficiently identify cluster centers by constructing a decision graph. In this graph, points with high local density and a large distance from other high-density points are selected as cluster centers. Once these centers are determined, the remaining non-central points are assigned to clusters based on their proximity to the nearest center. However, DPC performs poorly on manifold datasets with varying densities and is highly sensitive to the selection of the cut-off distance parameter.

Methods: To address these limitations and improve clustering performance, this study introduces an approach that employs the radial distribution function to quantify the relationship between data points and high-density regions. This method enables the estimation of the probability of finding neighboring points around a central or dense point, and a histogram is generated to represent these relationships.

Results: Unlike traditional DPC, the proposed method eliminates the need for a distance cut-off parameter. The approach was implemented using the natural neighbor algorithm and the radial distribution function in a MATLAB environment.

Conclusion: Experimental results demonstrated significant improvements in clustering accuracy and reductions in execution time compared to existing methods.
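For reference, the baseline DPC decision-graph quantities the proposal improves on are the local density ρ (neighbors within the cut-off distance dc) and the separation δ (distance to the nearest higher-density point); centers are points where both are large. A sketch of that baseline computation, noting that the proposed method itself removes the dc parameter:

```python
import numpy as np

def dpc_decision_values(X, dc):
    """Local density rho and separation delta for DPC's decision graph.

    rho[i]: number of points within cut-off distance dc of point i.
    delta[i]: distance to the nearest point of strictly higher density
    (for the global density peak, the maximum distance to any point).
    """
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    rho = (D < dc).sum(axis=1) - 1            # exclude the point itself
    delta = np.empty(len(X))
    for i in range(len(X)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = D[i, higher].min() if higher.size else D[i].max()
    return rho, delta
```

The sensitivity to dc is visible here: it enters rho directly, so shifting it reshapes the whole decision graph, which is exactly what replacing the cut-off with the radial distribution function avoids.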