Original Research Paper
Wireless Sensor Network
S. Ashraf; T. Ahmed; Z. Aslam; D. Muhammad; A. Yahya; M. Shuaeeb
Abstract
Background and Objectives: Quick response time and coverage range are the crucial factors by which the quality of service of a wireless sensor network can be judged. In some cases, even when a network possesses sufficient available bandwidth, customer satisfaction drops suddenly because of coverage problems. Simply increasing the number of nodes is neither a sound solution to the coverage problem nor a cost-effective one. Instead, judiciously changing the positions of the already-deployed nodes can resolve the coverage issue at low cost. Therefore, considering these circumstances, a Depuration based Efficient Coverage Mechanism (DECM) has been developed. This algorithm suggests new positions for previously deployed sensor nodes in order to fill the coverage gap. Methods: It is a redeployment process accomplished in two rounds. The first round applies the Dissimilitude Enhancement Scheme (DES), which identifies the nodes to be shifted to new positions. The second round controls unnecessary movement of the sensor nodes through the Depuration mechanism, so that the distance between the previous and new positions is reduced. Results: Factors such as loudness, pulse emission rate, maximum frequency, and sensing radius are meticulously explored in simulation rounds conducted in MATLAB. The performance of DECM has been compared with state-of-the-art algorithms, i.e., the Fruit Fly Optimization Algorithm (FOA), Particle Swarm Optimization (PSO), and Ant Colony Optimization (ACO), in terms of mean coverage range, computation time, standard deviation, and network energy depletion. Conclusion: According to the simulation results, DECM achieved a coverage range of more than 98% with a computation time of only about 0.016 seconds, outperforming FOA, PSO, and ACO.
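The paper's DES and Depuration steps are not reproduced here, but the following minimal Python sketch illustrates the general redeployment idea: measure grid coverage under a disk-sensing model, then pull the node nearest to a coverage gap toward the gap's centroid while capping each move to limit travel distance. All names and parameters (`redeploy`, `max_move`, the field size) are hypothetical, not taken from DECM.

```python
import numpy as np

def coverage_ratio(nodes, grid, r):
    """Fraction of grid points covered by at least one sensing disk."""
    d = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=2)
    return np.mean((d <= r).any(axis=1))

def redeploy(nodes, grid, r, max_move=5.0, iters=50):
    """Greedy redeployment: pull the node nearest to the centroid of the
    uncovered points toward it, capping each move to limit travel distance."""
    nodes = nodes.copy()
    for _ in range(iters):
        d = np.linalg.norm(grid[:, None, :] - nodes[None, :, :], axis=2)
        uncovered = grid[~(d <= r).any(axis=1)]
        if len(uncovered) == 0:
            break
        target = uncovered.mean(axis=0)          # centre of the coverage gap
        i = np.argmin(np.linalg.norm(nodes - target, axis=1))
        step = target - nodes[i]
        dist = np.linalg.norm(step)
        nodes[i] += step if dist <= max_move else step / dist * max_move
    return nodes

rng = np.random.default_rng(0)
nodes = rng.uniform(0, 100, (30, 2))             # random initial deployment
xs, ys = np.meshgrid(np.linspace(0, 100, 50), np.linspace(0, 100, 50))
grid = np.column_stack([xs.ravel(), ys.ravel()])
print("before:", coverage_ratio(nodes, grid, r=12.0))
print("after: ", coverage_ratio(redeploy(nodes, grid, r=12.0), grid, r=12.0))
```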
Original Research Paper
Data Preprocessing
F. Tabib Mahmoudi; A. Karami
Abstract
Background and Objectives: Pan-sharpening algorithms integrate the spectral capabilities of multispectral imagery with the spatial details of panchromatic imagery to obtain a product with both reliable spectral and spatial resolution. Because of the large diversity among pan-sharpening algorithms, the spatial and spectral deviations occurring in their results should be identified through quantitative assessment. Methods: In this research, the pan-sharpened images from PCA-, IHS-, and Gram-Schmidt-transformation-based algorithms are evaluated for the fusion of multispectral and panchromatic images from the Landsat-8 OLI sensor (a medium-resolution satellite) and WorldView-2 (a high-resolution satellite). Quantitative analysis is performed on the pan-sharpened products using the Per-Pixel Deviation (PPD) measure for spectral deviation analysis, and high-pass filter and edge extraction measures for analyzing the spatial correlations. Moreover, entropy and standard deviation evaluation measures are utilized based on the pan-sharpened image content. Results: The quantitative analysis shows that increasing the spatial resolution of the utilized remote sensing data directly affects the spectral, spatial, and content-based characteristics of the generated pan-sharpened products. The Gram-Schmidt-transformation-based pan-sharpening method has the least spectral deviations in both WorldView-2 and Landsat-8 satellite images, whereas the spectral, spatial, and content-based quantitative measures of PCA and IHS change with spatial resolution. Conclusion: It can be concluded that the Gram-Schmidt pan-sharpening method performs best on both the medium-resolution and high-resolution data sets according to the spectral, spatial, and content-based quantitative evaluation results. The IHS pan-sharpening method performs better than the PCA method on Landsat-8 OLI data, but as the spatial resolution of the data increases, PCA generates pan-sharpened products with better spectral, spatial, and content-based quantitative evaluation results.
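As an illustration of the kind of quantitative measures described (PPD for spectral deviation, entropy and standard deviation for image content), a small NumPy sketch is given below; the exact formulations used in the paper may differ, and the data here are synthetic stand-ins.

```python
import numpy as np

def per_pixel_deviation(ms, fused):
    """Mean absolute per-pixel spectral deviation between the (upsampled)
    multispectral bands and the pan-sharpened result."""
    return np.mean(np.abs(ms.astype(float) - fused.astype(float)))

def entropy(img, bins=256):
    """Shannon entropy of the image histogram (content-based measure)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 255), density=True)
    p = hist[hist > 0]
    p = p / p.sum()                    # renormalise densities to probabilities
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(1)
ms = rng.integers(0, 256, (4, 128, 128))         # 4-band multispectral stand-in
fused = np.clip(ms + rng.normal(0, 5, ms.shape), 0, 255)  # fake fusion result
print("PPD:", per_pixel_deviation(ms, fused))
print("entropy:", entropy(fused), "std:", fused.std())
```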
Original Research Paper
Cloud Computing
H. Jahanpour; H. Barati; A. Mehranzadeh
Abstract
Background and Objectives: Cloud computing has brought a new dimension to the IT world. Cloud computing technology allows a large number of Virtual Machines to be employed to run intensive applications, but any failure in a running application disrupts system operation and forces a restart. Methods: In this paper, to predict and avoid failure in HPC systems, a fault-tolerance method for High-Performance Computing (HPC) systems in the cloud, called Daemon-COA-MMT (DCM), is proposed. In the proposed method, the Daemon fault-tolerance technique has been enhanced, and COA-MMT has been utilized for load balancing. The method consists of four modules, which are used to determine the host state. When the system is in the alarm state, the current host may face failure; the most suitable host for migration is then selected, and process-level migration is performed. The method yields reduced migration overhead, low performance degradation, optimal use of underutilized hosts instead of leasing new ones, appropriate load balancing, even use of hardware resources across all hosts, attention to QoS and SLA, and a significant decrease in energy consumption. Results: The simulation results revealed that the proposed method reduces average job makespan, average response time, and average task execution cost by 18.06%, 35.68%, and 24.6%, respectively. The proposed fault-tolerance algorithm improves energy consumption by 30% and decreases the failure rate of HPC systems. Conclusion: In this study, the Daemon fault-tolerance technique has been enhanced, and COA-MMT has been utilized for load balancing of high-performance computing in the cloud.
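A toy sketch of the host-state and migration logic described above might look as follows; the four-module structure, thresholds, and COA-MMT selection policy of DCM are not detailed in the abstract, so the utilisation thresholds and the least-loaded-host rule below are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cpu_util: float   # fraction of CPU in use
    free_ram: float   # GB available

def host_state(h, warn=0.7, alarm=0.85):
    """Classify a host into normal / warning / alarm states from utilisation."""
    if h.cpu_util >= alarm:
        return "alarm"
    return "warning" if h.cpu_util >= warn else "normal"

def pick_migration_target(hosts, ram_needed):
    """Choose the least-loaded normal host with enough free RAM."""
    ok = [h for h in hosts if host_state(h) == "normal" and h.free_ram >= ram_needed]
    return min(ok, key=lambda h: h.cpu_util, default=None)

hosts = [Host("h1", 0.92, 4.0), Host("h2", 0.35, 16.0), Host("h3", 0.55, 8.0)]
for h in hosts:
    if host_state(h) == "alarm":          # host h1 is predicted to fail
        target = pick_migration_target(hosts, ram_needed=2.0)
        print(f"{h.name} in alarm state -> migrate process to {target.name}")
```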
Original Research Paper
Data Mining
R. Asgarnezhad; A. Monadjemi; M. SoltanAghaei
Abstract
Background and Objectives: With the spread of web applications, review sentiment classification has attracted increasing interest in text mining research. Traditional approaches do not capture the multiple relationships connecting words, even though the preprocessing phase and data reduction techniques make a huge difference in classification performance. Methods: This study proposes an efficient model for sentiment classification that combines preprocessing techniques, sampling methods, feature selection methods, and ensemble supervised classification to increase classification performance. In the feature selection phase of the proposed model, n-grams are applied to extract features based on the relationships between words. The best features are then selected through the particle swarm optimization algorithm, which iteratively refines the selected feature subset. Results: In the experimental study, a comprehensive range of comparative experiments was conducted to assess the effectiveness of the proposed model against the best systems in the literature on Twitter datasets. The proposed model achieves up to 97.33%, 92.61%, 97.16%, and 96.23% in terms of precision, accuracy, recall, and F-measure, respectively. Conclusion: The proposed model classifies the sentiment of tweets and online reviews through ensemble methods. In addition, two sampling techniques were applied in the preprocessing phase. The results confirm the superiority of the proposed model over state-of-the-art systems.
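The sketch below shows only two ingredients named above, n-gram features and an ensemble classifier, using scikit-learn; the PSO feature-selection step and the sampling techniques of the proposed model are omitted, and the tiny dataset is invented for demonstration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

docs = ["great phone, love it", "terrible battery, do not buy",
        "love the camera", "awful screen, terrible value"]
labels = [1, 0, 1, 0]

# Word uni- and bi-grams capture relationships between neighbouring words.
vec = TfidfVectorizer(ngram_range=(1, 2))
X = vec.fit_transform(docs)

# Majority-vote ensemble of three supervised classifiers.
ensemble = VotingClassifier([("lr", LogisticRegression()),
                             ("nb", MultinomialNB()),
                             ("svm", LinearSVC())], voting="hard")
ensemble.fit(X, labels)
print(ensemble.predict(vec.transform(["terrible camera"])))
```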
Original Research Paper
Classification
M. Mirhosseini; M. Fazlali
Abstract
Background and Objectives: The k-similarity problem is defined as measuring the similarity among objects and finding a group of objects from a dataset that have the most similarity to each other. This problem has become an important issue in information retrieval and data mining. The theory behind this concept is mathematically proven, but in practice it has high memory complexity and is very time-consuming. Besides, the solutions found by metaheuristics are not exact. Methods: This paper proposes an exact method for solving the k-similarity problem that reduces the memory complexity and decreases the execution time through parallelism using OpenMP. The experiments are performed on the application of text document resemblance. Results: It is shown that the memory complexity of the proposed method is reduced, and the experimental results show that the method accelerates the computations by a factor of about 5. Conclusion: The simulated results of the proposed method display a good improvement in speed, memory usage, and scalability compared with the previous exact method.
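The paper parallelizes its exact method with OpenMP in C/C++; as a loose Python analogue, the sketch below splits an exact most-similar-pair search (the k = 2 case) over document vectors across worker processes. The chunking scheme and the cosine measure are illustrative assumptions.

```python
import numpy as np
from multiprocessing import Pool

def best_pair_in_rows(args):
    """For a block of rows, find the most similar (i, j) document pair."""
    X, rows = args
    sims = X[rows] @ X.T                 # cosine similarity (rows are normalised)
    best = (-1.0, -1, -1)
    for k, i in enumerate(rows):
        sims[k, i] = -1.0                # ignore self-similarity
        j = int(np.argmax(sims[k]))
        best = max(best, (float(sims[k, j]), int(i), j))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((2000, 64))                       # stand-in document vectors
    X /= np.linalg.norm(X, axis=1, keepdims=True)
    chunks = [(X, r) for r in np.array_split(np.arange(len(X)), 4)]
    with Pool(4) as pool:                            # exact search, 4 workers
        print(max(pool.map(best_pair_in_rows, chunks)))
```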
Original Research Paper
Power
Z. Dehghani Arani; S. A. Taher; M. H. Karimi; M. Rahimi
Abstract
Background and Objectives: Wind turbines (WTs) with doubly fed induction generators (DFIGs) suffer from active and reactive power as well as electromagnetic torque oscillations, rotor over-current, and DC-link over-voltage problems under grid faults. Solutions presented in the literature can be classified into three categories: hardware protection devices, software methods, and combinations of hardware and software techniques. Methods: Conventional protection devices used to improve the fault ride-through (FRT) capability of grid-connected DFIG-based WTs complicate control of the rotor side converter (RSC), causing failure to comply with grid code requirements. Hence, the main idea of this paper is to develop a novel coordinated model predictive control (MPC) for the power converters without the need for any auxiliary hardware. Control objectives are defined to maintain the DC-link voltage, rotor current, and electromagnetic torque within permissible limits under grid fault conditions by choosing the best switching state, so as to meet and exceed FRT requirements. Model predictive current and electromagnetic torque control schemes are implemented in the RSC, and model predictive current and DC-link voltage control schemes are applied to the grid side converter (GSC). Results: To validate the proposed control method, simulation studies are compared with conventional proportional-integral (PI) controllers and sliding mode control (SMC) using a pulse-width modulation (PWM) switching algorithm. In different case studies comprising variable wind speeds, a single-phase fault, DFIG parameter variations, and a severe voltage dip, the rotor current and DC-link voltage are restricted to 2 pu and 1.2 times the DC-link rated voltage, respectively, by the proposed MPC-based approach. The maximum peak values of the DC-link voltage are 1783, 1463, and 1190 V using the PI control, SMC, and proposed methods, respectively. The maximum peak values of the rotor current obtained by the PI control, SMC, and proposed strategies are 3.23, 3.3, and 1.95 pu, respectively. Also, the PI control, SMC, and proposed MPC methods yield 0.8, 0.4, and 0.14 pu, respectively, as the maximum peak values of electromagnetic torque. Conclusion: The proposed control schemes effectively improve the FRT capability of grid-connected DFIG-based WTs and keep the DC-link voltage, rotor current, and electromagnetic torque within acceptable limits. Moreover, these schemes exhibit fast dynamic behavior during grid fault conditions thanks to the modulator-free nature of the MPC method.
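A minimal finite-control-set MPC loop, the general technique behind "choosing the best switching state", can be sketched as follows. The plant here is a toy RL circuit, not the DFIG model of the paper, and all parameters, states, and limits are illustrative.

```python
import numpy as np

# Finite-control-set MPC: at each step, try every converter switching state,
# predict the next current with a one-step model, and apply the state with
# the lowest cost.  Model and parameters are illustrative only.
R, L, Ts = 0.5, 1e-2, 1e-5          # resistance, inductance, sample time
V_DC = 600.0
states = [np.array([a, b]) for a in (-1, 1) for b in (-1, 1)]  # toy 2-phase set

def predict(i_now, v):
    """One-step Euler prediction of an RL circuit: di/dt = (v - R*i)/L."""
    return i_now + Ts / L * (v - R * i_now)

def cost(i_pred, i_ref, i_max=2.0):
    tracking = np.sum((i_pred - i_ref) ** 2)
    penalty = 1e6 * np.any(np.abs(i_pred) > i_max)   # hard current limit
    return tracking + penalty

i, i_ref = np.zeros(2), np.array([1.0, -0.5])
for _ in range(200):
    best = min(states, key=lambda s: cost(predict(i, s * V_DC / 2), i_ref))
    i = predict(i, best * V_DC / 2)                  # apply best switching state
print("tracked current:", i)
```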
Original Research Paper
Data Mining
Y. Rohani; Z. Torabi; S. Kianian
Abstract
Background: Prediction of students' academic performance is essential for systems that emphasize students' success, and the results can substantially increase the quality of teaching and learning. Through the application of data mining, useful and innovative patterns can be extracted from educational data. Methods: In this paper, a new metaheuristic algorithm combining simulated annealing and genetic algorithms is proposed for predicting students' academic performance in educational data mining. Although metaheuristic algorithms are among the best options for discovering hidden relationships in data, individually they do not perform well in accurately predicting students' academic performance. Therefore, the proposed method integrates the advantages of both the genetic and simulated annealing algorithms: the genetic algorithm is applied to explore new solutions, while simulated annealing is used to increase the exploitation power. Using this combination, the proposed algorithm is able to predict students' academic performance with high accuracy. Results: The efficiency of the proposed algorithm is evaluated on five different educational data sets, including two data sets of students of Shahid Rajaee University of Tehran and three online educational data sets. Our experimental results show an accuracy improvement of the proposed algorithm over four similar metaheuristic algorithms and five popular classification methods.
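A compact sketch of such a GA + SA hybrid is shown below: GA crossover and mutation explore, while a simulated-annealing acceptance test with a cooling temperature governs which children survive. The objective function is a stand-in; a real implementation would score prediction accuracy on the educational data.

```python
import math
import random

def fitness(x):                        # stand-in objective (higher is better);
    return -sum(v * v for v in x)      # a real system would score prediction accuracy

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(x, scale=0.3):
    i = random.randrange(len(x))
    y = list(x)
    y[i] += random.gauss(0, scale)
    return y

random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(4)] for _ in range(20)]
T = 1.0                                # SA temperature, cooled each generation
for gen in range(200):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    # SA-style acceptance: worse children may still enter while T is high.
    survivors = [c for c in children
                 if fitness(c) > fitness(pop[-1])
                 or random.random() < math.exp((fitness(c) - fitness(pop[-1])) / T)]
    pop = (parents + survivors)[:20]
    T *= 0.98
print("best:", max(map(fitness, pop)))
```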
Original Research Paper
Energy Hub
H. Hosseinnejad; S. Galvani; P. Alemi
Abstract
Background and Objectives: Demand for different forms of energy calls for Energy Hub Systems (EHS), but the economic dispatch problem has become complicated due to uncertainty in demand, so scenario generation and reduction techniques are used to consider the uncertainty of the EH demand. Depending on the amount of fuel used, each system has different generation costs. Besides the economic problems, configuration selection is a challenging dilemma in EHS design. In this paper, the optimal EHS operation is tackled together with the configuration issue. Methods: To do so, two EHS types are investigated to evaluate the effect of configuration while energy prices change simultaneously. The effect of the Demand Response (DR) feature, which is rarely considered in EHS management, is taken into account in this paper. Also, Metaheuristic Automatic Data Clustering (MADC) is used to reduce the dimension of the decision-making problem, replacing human decision makers in choosing the number of cluster centers while considering uncertainty. Shannon's entropy and the TOPSIS method are also used in the decision-making. The study is carried out in MATLAB and GAMS. Results: In addition to minimizing the computational burden, the proposed EHS not only enhances the benefit by reducing cost but also provides a semi-flat load curve in the peak period by employing an Emergency Demand Response Program (EDRP) and Time of Use (TOU) pricing. Conclusion: The results show that a significant reduction of the computational burden on demand data is possible with the automatic clustering method, without human interference. Besides improving the results of the proposed configuration, the approach demonstrates that the configuration of an EH, which is rarely considered, can be as important as other features in the presence of DRPs for meeting the desires of EH customers. Also, the integration of Shannon's entropy and the TOPSIS method can select the best DRP scenario without human interference. The results of this study are encouraging and warrant further analysis and research.
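The entropy-plus-TOPSIS decision step can be illustrated with a short NumPy sketch: Shannon's entropy derives criterion weights from the data itself, and TOPSIS ranks the DRP scenarios by closeness to the ideal solution. The decision matrix and criteria below are invented for demonstration.

```python
import numpy as np

def entropy_weights(M):
    """Criterion weights from Shannon's entropy: more informative columns weigh more."""
    P = M / M.sum(axis=0)                       # normalise each criterion column
    E = -(P * np.log(P)).sum(axis=0) / np.log(len(M))
    return (1 - E) / (1 - E).sum()

def topsis(M, w, benefit):
    """Rank alternatives by relative closeness to the ideal solution."""
    N = M / np.linalg.norm(M, axis=0) * w       # weighted normalised matrix
    ideal = np.where(benefit, N.max(axis=0), N.min(axis=0))
    nadir = np.where(benefit, N.min(axis=0), N.max(axis=0))
    d_pos = np.linalg.norm(N - ideal, axis=1)
    d_neg = np.linalg.norm(N - nadir, axis=1)
    return d_neg / (d_pos + d_neg)

# Rows: DRP scenarios; columns: benefit, peak reduction, cost (illustrative).
M = np.array([[120.0, 0.30, 45.0],
              [150.0, 0.22, 60.0],
              [135.0, 0.35, 50.0]])
w = entropy_weights(M)
score = topsis(M, w, benefit=np.array([True, True, False]))
print("weights:", w, "best scenario:", int(np.argmax(score)))
```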
Original Research Paper
R. Salmani; A. Bijari; S. H. Zahiri
Abstract
Background and Objectives: Due to the rapid development of wireless communications, bandpass filters have become key components in modern communication systems. Among the microwave filter technologies, planar microstrip-line structures are chosen for their low profile, light weight, ease of fabrication, and low manufacturing cost. Methods: This paper presents the design and simulation of a new microstrip dual-band bandpass filter. The proposed structure uses three coupled lines and a loaded asymmetric two-coupled-line section. The design method is based on introducing and generating transmission zeros in the frequency response of a wideband single-band filter: a wideband frequency response is obtained using the three coupled lines, and the transmission zeros are achieved using the novel loaded asymmetric two coupled lines. Results: The proposed dual-band filter is designed and simulated on a Rogers RO3210 substrate for WLAN applications. The dimensions of the proposed filter are 11.22 mm × 13.04 mm. The electromagnetic (EM) simulation is carried out with the Momentum EM (ADS) software. Simulation results show that the proposed dual-band bandpass filter has two pass-bands, at 2.4 GHz and 5.15 GHz, with a loss of less than 1 dB in both. Conclusion: Among the advantages of this filter are low loss, small size, and high attenuation between the two pass-bands.
Original Research Paper
Artificial Intelligence
M. Yousefi; R. Akbari; S. M. R. Moosavi
Abstract
Background and Objectives: It is generally accepted that the highest cost in software development is associated with the software maintenance phase. In corrective maintenance, the main task is correcting the bugs found by users. These bugs are submitted by users to a Bug Tracking System (BTS), where they are evaluated by the bug triager and assigned to developers for correction. To find a suitable developer to correct a bug, recent developer activities and previous bug fixes must be examined. This paper presents an automated method for assigning bugs to developers by identifying the similarity between new bugs and previously reported bug reports. Methods: For automatic bug assignment, four clustering techniques (Expectation-Maximization (EM), Farthest First, Hierarchical Clustering, and Simple KMeans) are used, and a tag is created for each cluster that indicates the developer associated with bug correction. To evaluate the quality of the proposed method, the clusters generated by these techniques are compared with the labels suggested by an expert triager. Results: To evaluate the performance of the proposed method, we use real-world data from a large-scale web-based system stored in the BTS of a software company. To select the appropriate clustering algorithm, the output of each algorithm is compared to the labels suggested by the expert triager, and the algorithm whose output is closest to the expert opinion is selected as the best. The results showed that the EM and Farthest First clustering algorithms, with a 3% similarity error, agree most closely with the expert opinion. Conclusion: The results obtained by the algorithms show that they can be successfully applied to bug assignment in real-world software development environments.
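A minimal scikit-learn sketch of the comparison described, clustering bug-report text with KMeans and an EM-style Gaussian mixture and scoring agreement with expert labels, is given below; the reports, labels, and the use of the adjusted Rand index as the agreement measure are illustrative assumptions.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

reports = ["login page crashes on submit", "crash when logging in",
           "report export misses rows", "exported report loses data",
           "UI button misaligned on mobile", "mobile layout broken"]
expert = [0, 0, 1, 1, 2, 2]        # developer labels from the expert triager

X = TfidfVectorizer().fit_transform(reports).toarray()
for name, labels in [
    ("kmeans", KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)),
    ("em", GaussianMixture(n_components=3, covariance_type="diag",
                           random_state=0).fit_predict(X)),
]:
    # Agreement with the expert triage, invariant to cluster numbering.
    print(name, adjusted_rand_score(expert, labels))
```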
Original Research Paper
Artificial Intelligence
S. Kianian; S. Farzi; H. Samak
Abstract
Background and Objectives: Simplicity and flexibility are the two basic features of graph models that have made them practical for real-life problems. Attributed graphs are popular among researchers because of their efficiency and functionality. An attributed graph is a graph whose nodes and edges can carry attributes; nodes and edges form the structural dimension, and their attributes form the contextual dimension, making such graphs more flexible for modeling real problems. Methods: In this study, a new clustering algorithm based on K-Medoids is proposed that simultaneously addresses the graph's structural dimension, through a heat diffusion algorithm, and its contextual dimension, through a weighted Jaccard coefficient. The clusters calculated by the proposed algorithm are denser and contain nodes with more similar attributes. Results: The real DBLP and PBLOG data sets are used to evaluate and compare this algorithm with recent and well-known clustering algorithms. Conclusion: The results indicate that this algorithm outperforms its counterparts with respect to structural quality, cluster contextual quality, and time complexity criteria.
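A small sketch of a combined structural/contextual distance might look as follows: a heat kernel on the graph Laplacian provides structural affinity, a Jaccard coefficient on node attribute sets provides contextual similarity (the paper's attribute weighting is omitted here), and the two are blended before a K-Medoids assignment step. The mixing weight, toy graph, and attributes are assumptions.

```python
import numpy as np
from scipy.linalg import expm

# Toy attributed graph: adjacency matrix plus an attribute set per node.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
attrs = [{"python", "ml"}, {"python", "ml"}, {"python"}, {"db"}, {"db", "sql"}]

L = np.diag(A.sum(axis=1)) - A
H = expm(-0.5 * L)                       # heat kernel: structural affinity

def jaccard(a, b):
    return len(a & b) / len(a | b)

n, alpha = len(A), 0.5                   # alpha blends structure vs. context
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        sim = alpha * H[i, j] / H.max() + (1 - alpha) * jaccard(attrs[i], attrs[j])
        D[i, j] = 1 - sim

# One K-Medoids assignment step around two hand-picked medoids.
medoids = [0, 4]
print([medoids[int(np.argmin([D[i, m] for m in medoids]))] for i in range(n)])
```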
Original Research Paper
Sliding Mode Control
H. Zahedi; G. Arab Markadeh; S. Taghipour
Abstract
Background and Objectives: Cascaded doubly fed induction generators (CDFIGs) can be directly connected to an isolated load or a power grid without the brushes needed in conventional DFIGs. The output control targets of a CDFIG are voltage and frequency before grid connection, and active and reactive power after it. From a control standpoint, output control of a CDFIG is a multi-input multi-output (MIMO) problem. In this paper, the Relative Gain Array (RGA) methodology, a MIMO interaction index, is used to show the degree of relevance between the control inputs and the output targets in both voltage control mode (before grid connection) and active-reactive power control mode (after grid connection). Based on the RGA results, conventional PI controllers cannot achieve decoupled control of the generator outputs in grid-connected mode. Therefore, a powerful method based on the sliding mode approach is proposed to generate the proper control voltages for output control of the CDFIG in both islanded and grid-connected modes. Simulation and experimental results using MATLAB and a TMS320F28335-based prototype of the CDFIG are provided to demonstrate the effectiveness and robustness of the proposed method. Methods: A mathematical method based on the RGA matrix is used to evaluate the amount of interaction between the output targets and the input control variables of CDFIGs in islanded and grid-connected modes. Results: A conventional PI controller is a proper method to control the output voltage of the Power Machine (PM) in a CDFIG, but it is not a suitable technique for active and reactive power control in grid-tied mode. Conclusion: Sliding mode control can be used for decoupled control of CDFIGs both before and after grid connection. Moreover, robustness against wind speed variations and parameter uncertainties is proved via both simulation and experimental tests.
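The RGA itself is a standard computation, the element-wise product of the gain matrix with the transpose of its inverse; the short sketch below shows it on an invented 2x2 gain matrix, not the CDFIG model identified in the paper.

```python
import numpy as np

def rga(G):
    """Relative Gain Array: element-wise product of G and (G^{-1})^T."""
    return G * np.linalg.inv(G).T

# Illustrative steady-state gain matrix of a 2x2 MIMO plant (not from the paper).
G = np.array([[0.9, 0.2],
              [0.3, 1.1]])
print(rga(G))
# Diagonal elements near 1 mean the pairing u1->y1, u2->y2 is weakly coupled;
# values far from 1 indicate strong interaction, motivating a true MIMO design
# such as sliding mode control instead of independent PI loops.
```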