Original Research Paper
Network Security
S. Goli-Bidgoli; M. SofarAli
Abstract
Background and Objectives: Vehicular Ad-Hoc Networks (VANETs) can enhance road safety and enable drivers to avoid different threats. Safety applications, mobile commerce, and other information services are among the available services that are affected by dynamic topology, vehicle speed, and node misbehavior. Dynamic topology makes routes unstable and unreliable, so improving the throughput and performance of a VANET through reliable and stable routes with low overhead is an important goal in this context. Methods: Examining the issues related to reliable routing and the different internal, external, and environmental factors that affect route reliability led to a new security framework in this paper. Analyzing the black-hole attack and its effects, as the most well-known attack in wireless networks, along with presenting a secure routing protocol, are other contributions of this paper. The proposed protocol uses a trust management system to detect and neutralize this type of attack. Results: Simulation results show that the presented trust-based framework can increase the reliability of the network by decreasing the effect of malicious nodes on the routing process. Conclusion: Our simulation results show that the proposed protocol can overcome the effects of black-hole attackers and can increase throughput by 93% and the packet received rate by 94.14% compared to the original AODV. Investigating the effect of other attacks, simulating an urban area with repetitive communications, and considering the RSU in verifying the trustworthiness of entities are suggested as future work.
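As a rough illustration of the kind of trust management described above, the Python sketch below maintains a per-neighbor forwarding score and excludes low-trust neighbors from routing; the class name, weights, and threshold are hypothetical and are not taken from the proposed protocol.

```python
# Hypothetical sketch of a neighbor trust table for black-hole mitigation.
# Names, weights, and thresholds are illustrative, not the paper's protocol.

class TrustTable:
    def __init__(self, drop_threshold=0.4):
        self.scores = {}                 # node_id -> trust score in [0, 1]
        self.drop_threshold = drop_threshold

    def record(self, node_id, forwarded: bool, alpha=0.1):
        # Exponential moving average of observed forwarding behavior.
        prev = self.scores.get(node_id, 0.5)
        obs = 1.0 if forwarded else 0.0
        self.scores[node_id] = (1 - alpha) * prev + alpha * obs

    def is_trusted(self, node_id) -> bool:
        # Nodes that advertise routes but rarely forward (black-hole
        # behavior) fall below the threshold and are excluded from routing.
        return self.scores.get(node_id, 0.5) >= self.drop_threshold
```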
Review paper
Control
S. Shams Shamsabad Farahani
Abstract
Background and Objectives: Wireless Sensor Networks (WSNs) are a specific category of wireless ad-hoc networks whose performance is highly affected by the application, lifetime, storage capacity, processing power, topology changes, and the communication medium and bandwidth. These limitations necessitate effective data transport control in WSNs that considers quality of service, energy efficiency, and congestion control. Methods: Congestion is an important issue in wireless networks. Congestion in WSNs badly affects the loss rate, channel quality, link utilization, number of retransmissions, traffic flow, network lifetime, delay, and energy as well as throughput. Due to the dominant role of WSNs, more efficient congestion control algorithms are needed. Results: In this paper, a comprehensive review of different congestion control schemes in WSNs is provided. In particular, different congestion control techniques are classified according to the way congestion is detected, notified, and mitigated. Furthermore, congestion mitigation algorithms are classified, and different performance metrics are used to compare congestion control algorithms. Conclusion: In this paper, congestion mitigation algorithms are classified into different groups. Finally, the current work attempts to provide specific directives for designing and developing novel congestion control schemes.
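As a minimal illustration of one common detection mechanism covered by such surveys, the sketch below flags congestion from buffer occupancy; the thresholds are illustrative and do not correspond to any particular protocol in the review.

```python
# Queue-occupancy congestion indicator of the kind many WSN schemes use for
# detection (thresholds are made up for illustration).
def congestion_level(queue_len, queue_cap, low=0.5, high=0.85):
    occupancy = queue_len / queue_cap
    if occupancy >= high:
        return "congested"
    if occupancy >= low:
        return "warning"
    return "normal"

print(congestion_level(44, 50))          # -> "congested"
```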
Original Research Paper
O. Abbas; M.R. Arvan; A. Mahmoudi
Abstract
Background and Objectives: The accuracy of target position detection in an IR seeker depends on the accuracy of tracking error signal (TES) extraction from the seeker Field of View (FOV). The type of reticle inside the seeker determines the output modulation signal that carries the TES. In this paper, the stationary wagon-wheel reticle is used, which makes the output an FM-modulated signal in the linear region of the FOV; however, the signal is distorted by changes in the radius of the target image spot (TIS) and in the nonlinear region of the FOV. Methods: First, we applied the Hilbert transform algorithm for the first time in this field and compared it with the conventional algorithm in the linear region of the FOV to decrease the effect of changing the radius of the TIS. Second, we presented a new method to extract the TES in the nonlinear region. Results: The results show improved accuracy of TES extraction in the linear and nonlinear regions over the FOV for different radii of the TIS. Conclusion: Improving TES extraction over the whole FOV improves the ability of the missile to track the target. This extraction faces problems such as the target lying in the nonlinear region of the FOV and changes in the radius of the TIS.
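For readers unfamiliar with the technique, the following sketch shows how a Hilbert transform recovers the instantaneous frequency of an FM-modulated signal; the carrier, deviation, and message frequencies are arbitrary stand-ins, not the seeker's actual reticle parameters.

```python
# Illustrative Hilbert-transform FM demodulation (parameters are made up).
import numpy as np
from scipy.signal import hilbert

fs = 100_000.0                           # sampling rate [Hz]
t = np.arange(0, 0.05, 1 / fs)
f_c, f_m, dev = 10_000.0, 50.0, 1_000.0  # carrier, message, deviation [Hz]

message = np.sin(2 * np.pi * f_m * t)    # stand-in for the tracking error
signal = np.cos(2 * np.pi * f_c * t - (dev / f_m) * np.cos(2 * np.pi * f_m * t))

analytic = hilbert(signal)                           # analytic signal
inst_phase = np.unwrap(np.angle(analytic))           # instantaneous phase
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # instantaneous frequency
recovered = (inst_freq - f_c) / dev      # approx. proportional to `message`
```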
Innovative Paper
Communications Networks
M. Ghaderi; V. Tabataba Vakili; M. Sheikhan
Abstract
Background and Objectives: Routing and data aggregation are two important techniques for reducing the communication cost of wireless sensor networks (WSNs). To minimize communication cost, routing methods can be merged with data aggregation techniques. Compressive sensing (CS) is one of the effective techniques for aggregating network data, which can reduce the cost of communication by reducing the amount of data routed to the sink. Spatiotemporal CS (STCS), by using the spatial and temporal correlation of sensor readings, can increase the compression rate in WSNs, thereby reducing the communication cost. Methods: In this paper, a new STCS method based on the geographic adaptive fidelity (GAF) protocol is proposed, which can effectively reduce the communication cost and energy consumption in WSNs. In the proposed method, temporal data are obtained from a random selection of temporal readings of cluster head (CH) sensors located in virtual cells of the clustered sensor area, and spatial data are formed from the readings of CHs located on the routes. Accordingly, a new structure of the sensing matrix is created. Results: The results show that the proposed method, compared to the method proposed in [29], which is the most similar method in the literature, reduces energy consumption by 22% to 43% in various scenarios implemented based on the number of required measurements at the sink (M) and the number of measurements on the routes (mr). Conclusion: In the proposed method, based on spatiotemporal CS (STCS), a new structure of the sensing matrix is created that can increase the compression rate, thereby reducing the communication cost in WSNs.
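A toy compressive-sensing example follows, with a generic Gaussian sensing matrix and orthogonal matching pursuit recovery; it only illustrates the CS principle, not the GAF-based spatiotemporal sensing-matrix structure proposed in the paper.

```python
# Toy CS example: random measurements of a sparse signal recovered with OMP.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m, k = 256, 64, 8                     # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

phi = rng.standard_normal((m, n)) / np.sqrt(m)   # generic sensing matrix
y = phi @ x                                      # compressed readings at the sink

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k).fit(phi, y)
x_hat = omp.coef_
print("relative recovery error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```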
Original Research Paper
Optoelectronics and Photonics
M. Shaveisi; A. Rezaei
Abstract
Background and Objectives: This study presents the importance of reversible logic in designing high-performance, low-power digital circuits. In our research, various sequential reversible circuits such as D, T, SR, and JK flip-flops are investigated based on carbon nanotube field-effect transistors (CNTFETs). Methods: By simultaneously using reversible logic gates and carbon nanotube transistors in the implementation of the flip-flops, and by introducing suitable transistor-level circuits for the conventional reversible gates, all reversible flip-flops are simulated at two supply voltages, 0.3 V and 0.5 V. The Hspice_H-2013.03-SP2 software is used to simulate these circuits in the 32 nm CNTFET technology (the standard Stanford SPICE model). Results: The simulation results indicate a significant reduction in the average power consumption of the D, T, SR, and JK flip-flops of about 99.98%, 82.79%, 60.46%, and 81.53%, respectively. Conclusion: Our results show that the proposed structures achieve high performance in terms of average power consumption and PDP.
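The defining property exploited here, logical reversibility, can be illustrated with the Fredkin (controlled-swap) gate, whose 3-bit mapping is a bijection; the sketch below checks this in Python and does not model the paper's transistor-level CNTFET designs.

```python
# Logical reversibility demonstrated with the Fredkin (controlled-swap) gate:
# the 3-bit mapping is a bijection, so inputs are always recoverable.
from itertools import product

def fredkin(c, a, b):
    # If the control bit is 1, swap the two target bits; otherwise pass through.
    return (c, b, a) if c else (c, a, b)

outputs = [fredkin(*bits) for bits in product((0, 1), repeat=3)]
assert len(set(outputs)) == 8            # 8 distinct outputs -> reversible
```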
Original Research Paper
Network Security
M. Eslamnezhad Namin; M. Hosseinzadeh; N. Bagheri; A. Khademzadeh
Abstract
Background and Objectives: Search protocols are among the main applications of RFID systems. Since a search protocol should be able to locate a certain tag among many tags, it should not only be secure against RFID threats but also be affordable. Methods: In this article, an RFID-based search protocol is presented. We use an encryption technique referred to as authenticated encryption to boost the security level; it provides confidentiality and integrity simultaneously. Results: Furthermore, since the proposed protocol belongs to the category of lightweight protocols, it is appropriate for applications that require many tags and low cost. In terms of security, the analysis results show a satisfactory security level, and the protocol is robust against different RFID threats such as replay, traceability, and impersonation attacks. Using the Ouafi-Phan model, BAN logic, and AVISPA, we also checked the security correctness of the suggested protocol. Conclusion: In this paper, we presented a scalable lightweight RFID search protocol. We employed an encryption technique called Authenticated Encryption (AE) to improve the security level of the suggested protocol.
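The authenticated-encryption primitive mentioned above can be illustrated with AES-GCM from the Python cryptography package, which provides confidentiality and integrity in one operation; this only demonstrates the primitive, not the paper's lightweight protocol, whose cipher choice is not stated in the abstract.

```python
# Authenticated encryption (confidentiality + integrity) with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)                   # unique per message

query = b"search: tag-42"                # hypothetical reader search query
associated = b"reader-id||session-7"     # authenticated but not encrypted

ciphertext = aead.encrypt(nonce, query, associated)
plaintext = aead.decrypt(nonce, ciphertext, associated)  # raises on tampering
assert plaintext == query
```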
Original Research Paper
Computational Intelligence
N. Sayyadi Shahraki; S.H. Zahiri
Abstract
Background and Objectives: Today, methods derived from reinforcement-learning-based approaches, owing to their power in learning and extracting optimal/desirable solutions to various problems, show significant breadth of application and success. This paper presents the application of reinforcement learning to automatic analog integrated circuit design. Methods: In this work, a multi-objective approach based on learning automata is evaluated for meeting the required functionalities and performance specifications while optimally minimizing MOSFET area and power consumption for two well-known CMOS op-amps. Results: The performance of the circuits is evaluated through HSPICE and the approach is implemented in MATLAB, so a combination of MATLAB and HSPICE is used. The two-stage and single-ended folded-cascode op-amps are designed in 0.25 μm and 0.18 μm CMOS technologies, respectively. According to the simulation results, a power of 560.42 and an area of 72.825 are obtained for the two-stage CMOS op-amp, and a power of 214.15 and an area of 13.76 are obtained for the single-ended folded-cascode op-amp. In addition, in terms of the total optimality index, MOLA has the best performance in both cases among the applied methods and other research works, with values of -25.683 and -34.162 dB, respectively. Conclusion: The results show the ability of the proposed method to optimize the aforementioned objectives compared with three well-known multi-objective algorithms.
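As a hedged sketch of the learning-automata machinery referred to above, the following linear reward-inaction (L_RI) update concentrates probability on actions that receive reward; the reward signal here is a dummy, whereas in the paper it would come from HSPICE performance evaluations.

```python
# Minimal linear reward-inaction (L_RI) learning automaton.
import numpy as np

def lri_update(p, action, rewarded, a=0.05):
    """Reinforce the chosen action on reward; leave p unchanged on penalty."""
    if rewarded:
        p = p * (1 - a)
        p[action] += a                   # p_i <- p_i + a * (1 - p_i)
    return p / p.sum()

rng = np.random.default_rng(1)
p = np.full(4, 0.25)                     # 4 candidate sizing moves, equally likely
for _ in range(200):
    action = rng.choice(4, p=p)
    rewarded = (action == 2)             # pretend move 2 improves area/power
    p = lri_update(p, action, rewarded)
print(p)                                 # probability mass concentrates on action 2
```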
Original Research Paper
Electronics
P. Halvaee; M.S. Beigi
Abstract
Background and Objectives: In this work, porous nanoparticles of cobalt ferrite were prepared by dissolving CoCl2.6H2O and FeCl3 in ethylene glycol in a hydrothermal process. Using ethylene glycol instead of DI water as the solvent yields a porous cobalt ferrite structure. Methods: In the present paper, 0.05 ml of a colloidal fluid of the fabricated nanostructure was deposited on interdigitated electrodes (IDE) on a printed circuit board (PCB) substrate by drop casting. The morphological and structural characteristics of the material were investigated by X-ray diffraction and scanning electron microscopy, and the results of these analyses show the porous nanostructure of the material. Results: The sensor's performance in detecting gas vapors was evaluated at different temperatures; the best response (20.38% for 100 ppm methanol vapor) was obtained for methanol vapor at room temperature. The sensor's selectivity for methanol vapor, its chemical stability, and its repeatability make it useful in different fields and industries. Conclusion: Porous nanoparticles of CoFe2O4 were prepared by a hydrothermal process. XRD analysis and SEM images confirmed the porosity of the nanostructure. The response of the sensor at different temperatures was measured; at room temperature it has the best response of 21.38% for 100 ppm methanol vapor. Room-temperature operation of the sensor reduces power consumption and decreases the risks of working at high temperatures. The sensor has good selectivity to methanol vapor in the presence of ethanol, acetone, methane, and LPG vapors. The repeatability and chemical stability of the sensor over long working times were confirmed.
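For context, a percentage gas response is often defined from the resistance change between clean air and the target gas; the definition and the resistance values below are assumptions for illustration, since the abstract does not state the paper's exact formula.

```python
# One common way a percentage gas response is computed (assumed definition;
# the resistance values are made up for illustration).
def response_percent(r_air, r_gas):
    return abs(r_air - r_gas) / r_air * 100.0

print(response_percent(1.00e6, 0.796e6))   # ~20.4 % for these example resistances
```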
Original Research Paper
Power
M. Nikzad; A. Samimi
Abstract
Background and Objectives: Suitable design as well as appropriate pricing of demand response (DR) programs are two important issues encountered by system operators. Assigning proper values can create more incentives and raise customers' participation levels as well as improve the technical and economic characteristics of the power system. Here, time of use (TOU), as an important DR scheme, is introduced linearly based on the concepts of self and cross price elasticity indices of load demand. Methods: In order to construct an effective TOU program, a combined optimization model over the operation cost and the customers' benefit is proposed based on the security-constrained unit commitment (SCUC) problem. Supplementary constraints are provided at each load point with a 24-hour energy consumption requirement along with DR limitations. Results: The IEEE 24-bus test system is employed to investigate the different features of the presented method. By varying the DR potential in the system, TOU rates are determined and then their impacts on the customers' electricity bills, operation cost, and reserve cost as well as the load profile of the system are analyzed. In addition, the effect of network congestion as a technical limitation is studied. The obtained results demonstrate the effectiveness and applicability of the proposed method. Conclusion: The simulation results demonstrate that the TOU rates lead to financial profit for all customers and to a reduction of the peak load as well as the operation cost, while the 24-hour energy consumption of customers at the load buses is fulfilled. Furthermore, the operation cost decreases gradually as the load profile becomes flatter. In addition, the effect of line congestion on the proposed method has been investigated, and it has been shown that line congestion leads to a profit reduction for customers at load points connected to the congested lines.
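The self/cross elasticity concept mentioned above is commonly expressed as a linear responsive-load model; the sketch below uses that standard form with made-up prices and elasticities, which may differ from the paper's exact formulation.

```python
# Standard linear responsive-load model based on self/cross price elasticities
# (illustrative numbers, not taken from the paper).
import numpy as np

d0 = np.array([80.0, 120.0, 100.0])      # base demand: valley, peak, off-peak [MW]
p0 = np.array([30.0, 30.0, 30.0])        # flat base price [$/MWh]
p = np.array([20.0, 45.0, 30.0])         # TOU prices [$/MWh]

E = np.array([[-0.10,  0.016, 0.012],    # self elasticities on the diagonal,
              [ 0.016, -0.10, 0.010],    # cross elasticities elsewhere
              [ 0.012,  0.010, -0.10]])

d = d0 * (1 + E @ ((p - p0) / p0))       # demand after the price change
print(d)                                 # load shifts from the peak to the valley period
```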
Original Research Paper
Software
A. Nourollah; N. Behzadpour
Abstract
Background and Objectives: This paper presents a new optimization problem in the field of linkage reconfiguration: minimizing the number of moving parts of a given robot arm needed to position its end effector at a given target point, as well as minimizing the movement of the movable parts. Methods: Initially, the movement-minimization problem is formally modeled. A criterion called AM (Arithmetic Measure) is introduced and used to quantify the motion of the linkage. It is then shown that the presented problem is NP-hard. Consequently, a greedy heuristic algorithm is presented to minimize the movement of the robot's moving components. After identifying the moving components and their movement, an algorithm is provided to determine the final configuration of the robot arm. Results: The results indicate that the discussed model successfully reduces the number of moving parts of the robot arm. Moreover, the results show that the proposed approach fulfills the goal of minimizing the moving linkage components. Furthermore, this method reduces arm wear, energy consumption, and the parameters and variables required for calculating the final configuration of the linkage. Conclusion: The presented algorithm solves the problem by mapping a robot arm with an arbitrary number of links to a robot with one or two links. The proposed heuristic approach requires O(n²) time and O(n) space.
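As background for the final-configuration step, the sketch below computes the end-effector position of a planar serial linkage from its joint angles; the link lengths and angles are illustrative, and the paper's AM measure and greedy algorithm are not reproduced.

```python
# Forward kinematics of a planar serial linkage: where does the end effector
# land for a given configuration? (Illustrative values only.)
import math

def end_effector(lengths, angles):
    x = y = 0.0
    total = 0.0
    for L, a in zip(lengths, angles):
        total += a                       # joint angles accumulate along the chain
        x += L * math.cos(total)
        y += L * math.sin(total)
    return x, y

print(end_effector([2.0, 1.5, 1.0], [math.pi / 4, -math.pi / 6, math.pi / 3]))
```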
Original Research Paper
Data Mining
I. Behravan; S.H. Zahiri; S.M. Razavi; R. Trasarti
Abstract
Background and Objectives: Big data refers to huge datasets with a large number of objects and a large number of dimensions. Mining and extracting knowledge from big datasets is beyond the capability of conventional data mining algorithms, including clustering algorithms, classification algorithms, feature selection methods, etc. Methods: Clustering, the process of dividing the data points of a dataset into different groups (clusters) based on their similarities and dissimilarities, is an unsupervised learning method that discovers useful information and hidden patterns from raw data. In this research, a new clustering method for big datasets is introduced based on the Particle Swarm Optimization (PSO) algorithm. The proposed method is a two-stage algorithm that first searches the solution space for the proper number of clusters and then searches for the positions of the centroids. Results: The performance of the proposed method is evaluated on 13 synthetic datasets. Its performance is also compared to X-means by calculating two evaluation metrics: the Rand index and the NMI index. The results demonstrate the superiority of the proposed method over X-means on all of the synthetic datasets. Furthermore, a biological microarray dataset is used to evaluate the proposed method in more depth. Finally, two real big mobility datasets, containing the trajectories traveled by several cars in the city of Pisa, are analyzed using the proposed clustering method. The first dataset includes the trajectories recorded on Sundays and the second contains the trajectories recorded on Mondays over 5 weeks. The achieved results show that people choose more diverse destinations on Sundays, although fewer trajectories are recorded on that day. Conclusion: Finding the number of clusters is a big challenge, especially for big datasets. The results achieved by the proposed method show its excellent performance in detecting the number of clusters for high-dimensional and massive datasets. The results also demonstrate the power and effectiveness of swarm intelligence methods in solving hard and complex optimization problems.
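The two external validation metrics named above, the Rand index and NMI, can be computed with scikit-learn as shown below on a toy labelling; the PSO clustering itself is not reproduced here.

```python
# Rand index and NMI between a ground-truth labelling and a clustering output.
from sklearn.metrics import rand_score, normalized_mutual_info_score

true_labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
found_labels = [0, 0, 1, 1, 1, 1, 2, 2, 0]   # e.g. output of a clustering run

print("Rand index:", rand_score(true_labels, found_labels))
print("NMI       :", normalized_mutual_info_score(true_labels, found_labels))
```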
Original Research Paper
Electronics
J. Khosravi; M. Shams Esfand Abadi; R. Ebrahimpour
Abstract
Background and Objectives: There are numerous applications for image registration (IR). The main purpose of IR is to find a map between two images of the same scene taken under different conditions; the main objective is to find this map so as to reconstruct the target image as well as possible. Methods: Needless to say, the IR task is an optimization problem. As for the optimization method, although evolutionary methods are sometimes more effective at escaping local minima, their speed does not match that of mathematical methods at all. In this paper, we employ a mathematical framework based on the Newton method. This framework is suitable for any efficient cost function; here we used the sum of squared differences (SSD). We also provide an effective strategy to avoid getting stuck in local minima. Results: The proposed Newton method with SSD as the cost function exhibits better speed and accuracy in comparison to gradient descent and genetic algorithm methods based on the presented criteria. By considering SSD as the model cost function, the proposed method provides an accurate and fast registration method that can be exploited by the relevant applications. Simulation results indicate the effectiveness of the proposed model. Conclusion: The proposed method, based on the Newton optimization technique applied to separate cost functions, is able to outperform regular gradient descent and genetic algorithms. The presented framework is not based on any specific cost function, so any innovative cost function could be effectively employed in our approach. Whether the objective is to achieve accurate or fast results, the proposed method can be configured accordingly.
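To make the SSD-plus-Newton idea concrete, the sketch below registers a pure translation with a Gauss-Newton update on an SSD cost; it is a simplified stand-in, not the paper's full Newton framework or its local-minima avoidance strategy.

```python
# Simplified Gauss-Newton registration of a pure translation under an SSD cost.
import numpy as np
from scipy import ndimage

def register_translation(fixed, moving, iters=50):
    theta = np.zeros(2)                              # estimated (row, col) shift
    for _ in range(iters):
        warped = ndimage.shift(moving, theta, order=1)
        r = (warped - fixed).ravel()                 # SSD residuals
        gy, gx = np.gradient(warped)                 # image gradients
        J = np.stack([gy.ravel(), gx.ravel()], axis=1)
        step, *_ = np.linalg.lstsq(J, r, rcond=None) # Gauss-Newton step
        theta += step
        if np.linalg.norm(step) < 1e-4:
            break
    return theta

rng = np.random.default_rng(0)
img = ndimage.gaussian_filter(rng.random((64, 64)), 3)
moving = ndimage.shift(img, (2.5, -1.5), order=1)    # displaced copy of `img`
print(register_translation(img, moving))             # roughly (-2.5, 1.5)
```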