Original Research Paper
Artificial Intelligence
R. Mohammadi Farsani; E. Pazouki
Abstract
Background and Objectives: Many real-world problems are time series forecasting (TSF) problems. Providing more accurate and flexible forecasting methods has therefore always been of interest to researchers. An important issue in forecasting a time series is the length of the prediction interval.
Methods: In this paper, a new method is proposed for time series forecasting that makes more accurate predictions over longer intervals than existing methods. Neural networks are an effective tool for estimating time series because of their nonlinearity and their applicability to different time series without problem-specific information. A variety of neural networks have been introduced, some of which have been used for forecasting time series. Encoder-decoder networks are one example: an encoder network encodes the input data according to a particular pattern, and a decoder network then decodes the encoded representation to produce the desired output. Because these networks capture more of the context, they achieve better performance. The transformer is an example of this type of network. A transformer neural network based on self-attention is presented that is particularly capable on time series forecasting problems.
Results: The proposed model has been evaluated on two benchmark real-world TSF datasets from different domains. The experimental results show that, compared to other well-known methods, the model is up to eight times more robust for long-term estimation and improves estimation accuracy by about 20 percent. Computational complexity is also significantly reduced.
Conclusion: The proposed tool performs better than, or competes with, previously introduced methods while requiring less computation and supporting longer estimation intervals. It was also found that with a better network configuration and better tuning of the attention mechanism, more desirable results can be obtained for any specific problem.
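The self-attention operation underlying the transformer can be summarized as scaled dot-product attention. The following is a minimal NumPy sketch, not the authors' implementation; the sequence length, model dimension, and random weights are illustrative assumptions only.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X (T x d)."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # project to queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[1])    # pairwise similarity, scaled
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over time steps
    return weights @ V                        # context-aware representation

# Illustrative usage on a toy series embedded into d = 8 features.
rng = np.random.default_rng(0)
T, d = 24, 8
X = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
context = self_attention(X, Wq, Wk, Wv)
print(context.shape)  # (24, 8)
```

In a forecasting setting, each time step attends to all earlier steps, which is what allows the model to relate distant observations directly instead of propagating information step by step as in a recurrent network.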
Original Research Paper
Target Detection
M. Imani
Abstract
Background and Objectives: Target detection is one of the main applications of remote sensing. Multispectral (MS) images, which have higher spatial resolution than hyperspectral images, are an important source for shape and geometric characterization, so MS target detection is of interest.
Methods: In this paper, a target detector appropriate for multispectral (MS) images is selected from among hyperspectral target detectors and redefined. Many target detectors have been proposed for hyperspectral images in the remote sensing field, and most of them use only spectral information. Since MS images have higher spatial resolution than hyperspectral ones, it is proposed to select a target detector that uses both spectral and spatial features. To this end, the attribute profile based collaborative representation (AP-CR) hyperspectral detector is chosen for MS images. Shape structures extracted by flexible attribute filters can significantly improve MS target detection.
Results: As a case study, wheat fields in Chenaran County, Iran, are chosen as the targets to be detected. An image acquired by Landsat 8 is used for the experiments. The results show the superior performance of AP-CR, with 96.09% accuracy for wheat detection using the Landsat 8 MS image.
Conclusion: The high performance of AP-CR is due to the extraction of flexible attribute characteristics and the use of collaborative representation to approximate each image pixel. Although the AP-CR method provides the highest accuracy, it requires a longer running time than the other detectors.
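Collaborative representation approximates each test pixel as a regularized linear combination of dictionary samples and uses the reconstruction residual as a detection statistic. The sketch below is a generic collaborative-representation detector in NumPy, not the AP-CR code itself; the dictionaries, regularization weight, and decision rule are illustrative assumptions.

```python
import numpy as np

def cr_residual(x, D, lam=1e-2):
    """Residual of representing pixel x (bands,) with dictionary D (bands x samples)."""
    # Regularized least squares: alpha = (D^T D + lam*I)^-1 D^T x
    gram = D.T @ D + lam * np.eye(D.shape[1])
    alpha = np.linalg.solve(gram, D.T @ x)
    return np.linalg.norm(x - D @ alpha)

# Toy example: a pixel is flagged as target when the background dictionary
# reconstructs it worse than the target dictionary does.
rng = np.random.default_rng(1)
bands = 7                                   # e.g. reflective MS bands (assumed)
D_bg = rng.normal(size=(bands, 30))         # background samples (assumed)
D_tg = rng.normal(size=(bands, 10))         # target samples (assumed)
pixel = D_tg[:, 0] + 0.05 * rng.normal(size=bands)
score = cr_residual(pixel, D_bg) - cr_residual(pixel, D_tg)
print("target" if score > 0 else "background")
```

In AP-CR the input to this step is not the raw spectrum but a stacked attribute profile, which is where the spatial shape information enters the detector.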
Original Research Paper
IoT Security
S. Saderi Oskuiee; F. Moazami; G. Oudi Ghadim
Abstract
Background and Objectives: Radio Frequency Identification (RFID) systems use radio frequency waves to exchange information between a legitimate sender and a receiver. One of the important features of RFID systems is finding and tracking a specific tag among a large number of tags. Numerous works have addressed authentication and ownership-transfer protocols, but much less research has been done on tag searching. Although security is a paramount factor in search protocols, designers today are looking for a secure search protocol that is also low cost. One way to obtain a low-cost search protocol is to make it compatible with the EPC C1G2 standard, the electronic product code class 1 generation 2 standard that operates in the 860-960 MHz frequency range.
Methods: Recently, Sundaresan et al. proposed an RFID tag search protocol based on quadratic residues, 128-bit pseudo-random number generators, and the XOR operation, which can easily be implemented on passive tags and is compatible with the EPC C1G2 standard. We show that this protocol is not immune to tag tracing and improve it so that the traceability attack is no longer applicable while the protocol remains low cost and EPC compatible.
Results: Since the weakness in Sundaresan et al.'s search protocol stems from the tag's inability to distinguish previously used queries from new ones, we improve the protocol by adding a counter to the queries, so the tag can tell whether a query has already been used. We then analyze the security of the improved protocol and prove its formal and informal security against known attacks.
Conclusion: In this paper, we first analyze the security of Sundaresan et al.'s search protocol and show that it is vulnerable to a traceability attack under two different scenarios. We then propose an improved search protocol that is secure against tag tracing, and finally we analyze the security of the improved protocol.
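The core of the proposed fix is a freshness check: the tag only answers queries whose counter is strictly greater than the last counter it has seen, so a replayed query can no longer be used to trace it. The toy sketch below illustrates this counter comparison only; the message layout and cryptographic steps of the actual protocol are omitted, and all identifiers here are hypothetical.

```python
class Tag:
    """Toy model of the tag-side freshness check (not the full protocol)."""

    def __init__(self):
        self.last_counter = 0  # highest counter value seen so far

    def handle_query(self, counter, payload):
        # Reject replayed or stale queries: re-sending an old query cannot
        # elicit the same traceable response again.
        if counter <= self.last_counter:
            return None
        self.last_counter = counter
        # ... verify payload and compute the pseudo-random response here ...
        return "response-to-" + str(counter)

tag = Tag()
print(tag.handle_query(1, b"q1"))  # fresh query -> answered
print(tag.handle_query(1, b"q1"))  # replayed query -> None (ignored)
```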
Original Research Paper
Computer Vision
M. Taheri; M. Rastgarpour; A. Koochari
Abstract
Background and Objectives: Medical image segmentation is a challenging task due to the low contrast between the region of interest and other textures, hair artifacts in dermoscopic images, illumination variations in images such as chest X-rays, and varying image acquisition conditions.
Methods: In this paper, we use a method based on convolutional neural networks (CNNs) for medical image segmentation and compare our results with two well-known architectures, the U-Net and FCN neural networks. For the loss function, we use both the Jaccard distance and binary cross-entropy, and the optimization algorithm is SGD with Nesterov momentum. Two preprocessing steps are applied: resizing the image dimensions to speed up processing, and image augmentation to improve the network's results. Finally, a threshold technique is applied as postprocessing on the network outputs to improve image contrast. We evaluate our model on the publicly available PH2 database for melanoma lesion segmentation and on chest X-ray images, because, as mentioned, these two types of medical images contain hair artifacts and illumination variations; we demonstrate the robustness of our method for segmenting these images and compare it with other methods.
Results: Experimental results show that this method outperforms the two other well-known architectures, the U-Net and FCN convolutional neural networks. In addition, we improve on the performance metrics previously reported for dermoscopic and chest X-ray segmentation.
Conclusion: In this work, we propose an encoder-decoder framework based on deep convolutional neural networks for segmenting dermoscopic and chest X-ray medical images. Two image augmentation techniques, rotation and horizontal flipping, are applied to the training dataset before it is fed to the network. The predictions produced by the model on test images are postprocessed with a threshold technique to remove the blurry boundaries around the predicted lesions.
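The Jaccard distance used as a loss penalizes the non-overlap between the predicted mask and the ground-truth mask. The snippet below is a generic soft-Jaccard loss in NumPy for illustration, not the paper's training code; the smoothing constant and the 0.5 threshold are assumed values.

```python
import numpy as np

def jaccard_distance(y_true, y_pred, smooth=1.0):
    """Soft Jaccard (IoU) distance between a binary mask and a probability map."""
    y_true = y_true.ravel().astype(float)
    y_pred = y_pred.ravel().astype(float)
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    iou = (intersection + smooth) / (union + smooth)
    return 1.0 - iou  # 0 for perfect overlap, approaches 1 for no overlap

# Threshold postprocessing: turn the probability map into a hard mask.
pred = np.array([[0.1, 0.8], [0.6, 0.2]])
mask = (pred > 0.5).astype(np.uint8)
print(jaccard_distance(mask, pred))
```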
Original Research Paper
Video Processing
A. Akbari; H. Farsi; S. Mohamadzadeh
Abstract
Background and Objectives: Video processing is one of the essential concerns that has received much attention in recent years. Social group detection is one of the most important tasks in crowd analysis. For human-like robots, detecting groups and the relationships between their members is important. Moving in a group, which consists of two or more people, means that the members of the group move in the same direction and at the same speed.
Methods: In the proposed method, a deep neural network (DNN) is applied to detect social groups using features including Euclidean distance, proximity distance, motion causality, trajectory shape, and heat maps. First, features are extracted for every pair of people in the video and assembled into a feature matrix. The DNN then learns social groups from the feature matrix.
Results: The goal is to detect groups of two or more individuals. The proposed method, combining the DNN with the extracted features, detects social groups, and its output is compared with that of other methods.
Conclusion: In recent years, the use of deep neural networks (DNNs) for learning and detection has increased. In this work, we used DNNs with extracted features to detect social groups. The reported results and the outputs on the test videos demonstrate the utility of DNNs with the extracted features.
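One of the pairwise cues is the Euclidean distance between two people's trajectories; stacking such cues for every pair of people yields the feature matrix that feeds the DNN. The sketch below builds a toy pairwise feature matrix in NumPy, reduced to two simple cues (mean distance and speed difference); it does not reproduce the paper's full descriptor set.

```python
import numpy as np
from itertools import combinations

def pair_features(track_a, track_b):
    """Two toy pairwise cues from trajectories of shape (frames, 2)."""
    mean_dist = np.mean(np.linalg.norm(track_a - track_b, axis=1))
    speed_a = np.mean(np.linalg.norm(np.diff(track_a, axis=0), axis=1))
    speed_b = np.mean(np.linalg.norm(np.diff(track_b, axis=0), axis=1))
    return [mean_dist, abs(speed_a - speed_b)]

# Toy scene: three people tracked over 10 frames.
rng = np.random.default_rng(2)
tracks = [np.cumsum(rng.normal(size=(10, 2)), axis=0) for _ in range(3)]
feature_matrix = np.array([pair_features(tracks[i], tracks[j])
                           for i, j in combinations(range(len(tracks)), 2)])
print(feature_matrix.shape)  # (3 pairs, 2 features) -> input to the DNN
```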
Original Research Paper
Fault Detection
H. Yektamoghadam; A. Nikoofard
Abstract
Background and Objectives: Human health is an issue that has always been a priority for scientists, doctors, medical engineers, and others. A Wireless Body Area Network (WBAN) connects independent nodes (e.g., sensors and actuators) situated in clothing, on the body, or under the skin of a person. In the 21st century, the advent of technology in different aspects of human life has given WBANs special value in future medical technology. Energy harvesting from the environment or the human body, so that a node is independent of batteries or power supplies, is an important issue in WBANs. Photovoltaic energy harvesting (PVEH), piezoelectric energy harvesting (PEH), RF energy harvesting (RFEH), and thermoelectric energy harvesting (TEH) are some of the techniques used for energy harvesting in WBANs. Fault detection and diagnosis is an important problem in engineering, and engineers and researchers are always trying to find better ways to identify, detect, and control faults in different systems.
Methods: We consider a thermoelectric generator (TEG) for measuring the energy harvested from the human body and the power generated on people under different ambient conditions. We also use data reduction methods, including principal component analysis (PCA) and linear discriminant analysis (LDA), together with neural network methods, namely PCA with an MLP, LDA with an MLP, dynamic PCA with an MLP, and dynamic LDA with an MLP, for fault detection in the thermoelectric generator (TEG).
Results: This study shows that, for the case studied in this paper, the different data reduction algorithms detect faults well; the nonlinear methods give more accurate answers than the linear methods, but the linear methods are easier to implement.
Conclusion: According to the simulation results, all the methods discussed in this paper are acceptable for fault detection. In this paper, we introduce linear and nonlinear data reduction algorithms as new methods for fault detection in WBANs.
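A PCA-plus-MLP fault detector first projects the sensor measurements onto a few principal components and then classifies the reduced features as normal or faulty. The following scikit-learn sketch shows that pipeline on synthetic data only; the number of components, network size, and the synthetic fault (a bias on one channel) are assumptions, not the paper's TEG data.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Synthetic TEG-like measurements: 6 channels, faulty samples carry a bias
# on one channel (purely illustrative).
rng = np.random.default_rng(3)
normal = rng.normal(size=(300, 6))
faulty = rng.normal(size=(300, 6))
faulty[:, 2] += 2.0
X = np.vstack([normal, faulty])
y = np.r_[np.zeros(300), np.ones(300)]       # 0 = normal, 1 = fault

model = make_pipeline(PCA(n_components=3),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                    random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```

The LDA-based variants replace the PCA stage with a supervised projection, and the dynamic variants augment each sample with lagged measurements before the projection.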
Original Research Paper
Computer Architecture
M. Mosayebi; M. Dehyadegari
Abstract
Background and Objectives: Graph processing is gaining increasing attention in the era of big data. However, graph processing applications are highly memory intensive due to the nature of graphs. Processing-in-memory (PIM) is an old idea that has recently been revisited with advances in technology, specifically the ability to manufacture 3D stacked chipsets. PIM proposes enriching memory units with computational capabilities to reduce the cost of data movement between the processor and the memory system. Considering recent advances in the field, this approach appears to be a way of dealing with large-scale graph processing.
Methods: This paper explores real-world PIM technology to improve graph processing efficiency by reducing irregular access patterns and improving temporal locality using the Hybrid Memory Cube (HMC). We propose NodeFetch, a new method for accessing nodes and their neighbors while processing a graph, by adding a new command to the HMC system.
Results: Results of our simulations on a set of real-world graphs show that the proposed idea achieves a 3.3x average speedup and a 69% reduction in energy consumption over the baseline PIM architecture, which is HMC.
Conclusion: Most techniques in the field of processing-in-memory employ methods to reduce the movement of data between the processor and memory. This paper proposes a method that reduces graph processing execution time and energy consumption by reducing cache misses while processing a graph.
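The irregular access pattern that NodeFetch targets arises when a graph stored in compressed sparse row (CSR) form is traversed: fetching a node's neighbor list touches one contiguous slice of the edge array, but the subsequent per-neighbor data accesses are scattered. The short Python sketch below only illustrates that CSR access pattern; it is not a model of the HMC command itself.

```python
# CSR representation of a small graph: node i's neighbors are
# col_idx[row_ptr[i]:row_ptr[i+1]].
row_ptr = [0, 2, 4, 5, 7]          # 4 nodes
col_idx = [1, 3, 0, 2, 3, 0, 2]    # 7 directed edges
node_value = [1.0, 2.0, 3.0, 4.0]  # per-node data (e.g., a rank value)

def fetch_neighbors(node):
    """Contiguous fetch of a node's neighbor IDs."""
    return col_idx[row_ptr[node]:row_ptr[node + 1]]

for u in range(4):
    # The neighbor IDs are contiguous, but reading node_value[v] for each
    # neighbor v is the scattered, cache-unfriendly part of the traversal.
    total = sum(node_value[v] for v in fetch_neighbors(u))
    print(u, total)
```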
Original Research Paper
Artificial Intelligence
S. Tabatabaei; H. Nosrati Nahook
Abstract
Background and Objectives: With recent progress in wireless communication technology, powerful and inexpensive wireless receivers are used in a variety of mobile applications. Mobile ad hoc networks are self-organizing networks of mobile nodes that communicate with each other without central control; they have gained considerable attention due to their adaptability, scalability, and cost reduction. Routing and power consumption are major problems in mobile networks because the network topology changes frequently, and mobile wireless networks suffer from high error rates, power constraints, and limited bandwidth. Because routing protocols are highly important in dynamic multi-hop networks, many researchers have studied the routing problem in Mobile Ad hoc Networks (MANETs). This paper proposes a new routing algorithm for MANETs based on the Cuckoo Optimization Algorithm (COA).
Methods: COA is inspired by the lifestyle of a family of birds called cuckoos; their lifestyle, egg-laying features, and breeding are the basis of this optimization algorithm. COA starts with an initial population consisting of two types of cuckoos: mature cuckoos and eggs. The algorithm tries to find more stable links for routing.
Results: Simulation results demonstrate the high performance of the proposed work in terms of throughput, delay, hop count, and discovery time.
Conclusion: The convergence of cuckoo search can be established with a Markov chain model, showing that it satisfies the two conditions of global convergence for a random search algorithm. Cuckoo search is also suitable for solving continuous and multi-objective problems. We carried out extensive experiments to verify the performance of the cuckoo algorithm for routing in MANETs, and the results show the superiority of the proposed method over the well-known AODV algorithm.
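A cuckoo-style optimizer iterates between generating candidate solutions around the current best and abandoning a fraction of the worst nests. The sketch below is a heavily simplified cuckoo-search-style loop on a toy cost function; it replaces the Lévy flight with a Gaussian step, and neither the routing fitness function nor the exact COA update of the paper is reproduced here.

```python
import numpy as np

def cuckoo_search(fitness, dim, n_nests=15, iters=100, pa=0.25, step=0.1):
    """Heavily simplified cuckoo-search loop (minimizes `fitness`)."""
    rng = np.random.default_rng(4)
    nests = rng.uniform(-1, 1, size=(n_nests, dim))
    scores = np.apply_along_axis(fitness, 1, nests)
    for _ in range(iters):
        # New solutions via a random walk biased toward the current best nest.
        best = nests[scores.argmin()]
        trial = nests + step * rng.normal(size=nests.shape) * (best - nests)
        trial_scores = np.apply_along_axis(fitness, 1, trial)
        better = trial_scores < scores
        nests[better], scores[better] = trial[better], trial_scores[better]
        # Abandon a fraction pa of the worst nests (host bird discovers eggs).
        worst = scores.argsort()[-int(pa * n_nests):]
        nests[worst] = rng.uniform(-1, 1, size=(len(worst), dim))
        scores[worst] = np.apply_along_axis(fitness, 1, nests[worst])
    return nests[scores.argmin()], scores.min()

# Toy fitness standing in for a route-stability cost (lower is better).
best_x, best_f = cuckoo_search(lambda x: np.sum(x ** 2), dim=5)
print(best_f)
```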
Original Research Paper
Power
P. Naderi; B. Ehsan Maleki; H. Beiranvand
Abstract
Background and Objectives: In this paper, a novel objective function is proposed for designing power system stabilizers (PSSs). Although the aim of previous designs was to enhance the stability of the critical modes, the resulting stability indices were to some extent low and, in some cases, not acceptable at all. The prospect of attaining higher stability motivated the authors to design a new objective function in this study. In all previous objective functions, the same priority is given to all modes, and the objective function is defined in a general form. A novel function, called Variable Slope Damping Scale (VSDS), is presented; it is based on a variable slope assigned to the straight line of the fan-shaped region, an area in the complex plane that determines the eigenvalue placement range, with its tip at a negative reference point. This can be an efficient solution to the low damping of the critical modes. In general, more damping for critical modes and lower search priority for non-critical modes are taken as the key points. Applying VSDS leads to high damping scales for the critical modes, and the nonlinear simulation results and eigenvalue analysis demonstrate that the proposed approach is highly effective in damping the most critical modes.
Methods: The proposed method assumes a variable slope for the straight line of the fan-shaped convergence region (the area specified for pole placement). Indeed, increasing the damping scale of the critical modes is taken as the key point in introducing a powerful objective function.
Results: Using the proposed objective function, the value of the damping scale and the overall dynamic stability of the test system are increased.
Conclusion: It has also been shown that a convergence region with a variable slope is better than one with a constant slope for the optimal tuning of the wide-area PSS (WAPSS). In other words, the damping scale achieved with the proposed method, compared to existing techniques, clearly shows that the proposed objective function is more effective than the others.
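The fan-shaped region is usually characterized through the damping ratio of each eigenvalue: a mode sigma + j*omega lies inside the fan when its damping ratio zeta = -sigma / sqrt(sigma^2 + omega^2) exceeds a slope-dependent threshold. The snippet below computes damping ratios and a simple penalty for modes outside an assumed fan; the threshold value and the penalty form are illustrative, not the VSDS function itself.

```python
import numpy as np

def damping_ratio(eig):
    """Damping ratio of a complex eigenvalue sigma + j*omega."""
    return -eig.real / np.abs(eig)

def fan_penalty(eigs, zeta_min=0.1):
    """Sum of damping-ratio shortfalls for modes outside the fan-shaped region."""
    zetas = np.array([damping_ratio(e) for e in eigs])
    return np.sum(np.maximum(zeta_min - zetas, 0.0)), zetas

# Toy closed-loop modes: one well damped, one poorly damped (critical).
eigs = np.array([-1.0 + 5.0j, -0.05 + 3.0j])
penalty, zetas = fan_penalty(eigs)
print(zetas)    # ~[0.196, 0.017] -> the second mode violates zeta_min
print(penalty)  # positive value the tuning algorithm would drive to zero
```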
Original Research Paper
Image Processing
A. Mohammadi Anbaran; P. Torkzadeh; R. Ebrahimpour; N. Bagheri
Abstract
Background and Objectives: Programmable logic devices, such as field programmable gate arrays, are well suited to implementing biologically inspired visual processing algorithms, and the HMAX model is one such algorithm. This model mimics the feedforward path of object recognition in the visual cortex.
Methods: HMAX has several layers, and its most computation-intensive stage is arguably the S1 layer, which applies 64 2D Gabor filters with various scales and orientations to the input image. A Gabor filter is the product of a Gaussian window and a sinusoid. Using the separability property of the Gabor filter in the 0° and 90° directions, and assuming an isotropic filter in the 45° and 135° directions, a 2D Gabor filter is converted into two more efficient 1D filters.
Results: This paper presents a novel hardware architecture for the S1 layer of the HMAX model, in which a 1D Gabor filter is applied twice to realize the 2D filter. Exploiting the even or odd symmetry of the Gabor filter coefficients reduces the required number of multipliers by about 50%. The normalization value at every location of the input image is also calculated simultaneously. The implementation of this architecture on the Xilinx Virtex-6 family shows a 2.83 ms delay for a 128×128-pixel input image, a 1.86x speedup over the best previous implementation.
Conclusion: In this study, a hardware architecture is proposed to realize the S1 layer of the HMAX model. Using the separability and symmetry of the filter coefficients saves significant resources, especially DSP48 blocks.
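Separability means that a 2D filter which factors into an outer product of two 1D kernels can be applied as a row pass followed by a column pass, reducing the per-pixel cost from k*k to roughly 2k multiplications. The NumPy sketch below demonstrates this equivalence with a separable Gaussian window; the actual Gabor kernels, scales, and orientations of the S1 layer are not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

# A 1D window; its outer product gives the 2D kernel.
g = np.array([1.0, 4.0, 6.0, 4.0, 1.0])
g /= g.sum()
kernel_2d = np.outer(g, g)

rng = np.random.default_rng(5)
image = rng.random((32, 32))

# Direct 2D convolution: ~k*k multiplications per pixel.
direct = convolve2d(image, kernel_2d, mode="same")

# Separable version: a row pass then a column pass, ~2k multiplications.
rows = convolve2d(image, g[np.newaxis, :], mode="same")
separable = convolve2d(rows, g[:, np.newaxis], mode="same")

print(np.allclose(direct, separable))  # True: same result, fewer operations
```

The additional hardware saving from symmetry comes from pairing mirrored taps of the 1D kernel before multiplication, which is why roughly half the multipliers can be removed.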
Original Research Paper
Channel Allocation for Power Line Communications
M. Sheikh Hosseini; S. M. Nosratabadi
Abstract
Background and Objectives: Broadband power line communication (PLC) is a promising candidate for implementing the access network of different telecommunication technologies. The planning process of a PLC access network is subdivided into two main optimization problems: generalized base station placement and PLC channel allocation.
Methods: This paper studies the latter problem for an actual PLC network by taking both in-line and in-space neighboring schemes into account for the first time and modeling PLC channel allocation accordingly. Different aspects of this problem are first introduced in detail, and then our proposed models are presented and evaluated numerically.
Results: Specifically, for each pair of broadband PLC cells, in-line neighboring is modeled as either one or zero, indicating whether or not the cells are neighbors; in-space neighboring is expressed as a number in the interval [0, 1] according to the physical vicinity of the cells' wiring; and consequently the aggregate neighboring intensity is a number in [0, 2]. The network interference is then defined as a function of the neighboring intensity and of the frequency sets assigned to neighboring cells, so that the greater the neighboring intensity and the smaller the distance between the assigned sets, the more interference is imposed on the PLC network. Finally, the meta-heuristic genetic and shuffled frog-leaping algorithms are used to solve the resulting PLC channel allocation problem by minimizing the interference.
Conclusion: In general, the results confirm the success of the proposed method in modeling the PLC channel allocation problem in actual scenarios, tracking the network interference in these situations, providing an optimal solution, and subsuming all previous research as a comprehensive method.
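The quantity being minimized can be read as a sum, over neighboring cell pairs, of a term that grows with the neighboring intensity and shrinks with the spectral distance between the assigned frequency sets. The sketch below encodes one plausible form of that objective in Python; the exact functional form, the intensity values, and the frequency-set distance used in the paper may differ.

```python
import numpy as np

def interference(assignment, intensity):
    """Total interference for a channel assignment (one frequency-set index
    per cell), given the pairwise neighboring intensity matrix in [0, 2]."""
    n = len(assignment)
    total = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            if intensity[i, j] > 0:
                # Closer frequency sets -> larger interference contribution.
                set_distance = abs(assignment[i] - assignment[j])
                total += intensity[i, j] / (1.0 + set_distance)
    return total

# Toy network of 4 cells: intensity = in-line (0/1) + in-space ([0,1]) terms.
intensity = np.array([[0.0, 1.6, 0.3, 0.0],
                      [1.6, 0.0, 1.0, 0.2],
                      [0.3, 1.0, 0.0, 1.4],
                      [0.0, 0.2, 1.4, 0.0]])
print(interference([0, 1, 2, 0], intensity))  # cost a GA/SFLA would minimize
```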
Original Research Paper
Compressive Sensing
Z. Habibi; H. Zayyani; M. Shams Esfandabadi
Abstract
Background and Objectives: Compressive sensing (CS) theory has been widely used in various fields, such as wireless communications. One of the main issues in the wireless communication field in recent years is how to identify block-sparse systems. This issue can be addressed using CS theory and block-sparse signal recovery algorithms.
Methods: This paper presents a new block-sparse signal recovery algorithm for the adaptive block-sparse system identification scenario, named the stochastic block normalized iterative hard thresholding (SBNIHT) algorithm. The proposed algorithm is a block version of the sparse signal recovery normalized iterative hard thresholding (NIHT) algorithm with an adaptive filter framework. It uses a search method to identify the blocks of the impulse response of the unknown block-sparse system to be estimated. In addition, the condition necessary to guarantee the convergence of this algorithm is derived.
Results: Simulation results show that the proposed SBNIHT algorithm has better convergence and tracking capability than other algorithms in the literature.
Conclusion: In this study, a new greedy algorithm is suggested for the block-sparse system identification scenario. Although the proposed SBNIHT algorithm is more complex than other competing algorithms, it has better convergence and tracking performance.
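The block hard thresholding step keeps only the K blocks of the coefficient vector with the largest energy and zeroes out the rest; the recovery loop alternates a gradient (adaptive filter) update with this projection. The snippet below implements the thresholding operator alone in NumPy; the block size, K, and the example vector are illustrative, and this is not the full SBNIHT update.

```python
import numpy as np

def block_hard_threshold(w, block_size, k):
    """Keep the k blocks of w with the largest l2 norm; zero the others."""
    blocks = w.reshape(-1, block_size)
    energies = np.linalg.norm(blocks, axis=1)
    keep = np.argsort(energies)[-k:]       # indices of the k strongest blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

# Toy block-sparse impulse response: 6 blocks of length 4, 2 of them active.
w = np.zeros(24)
w[4:8] = [1.0, -0.5, 0.8, 0.3]
w[16:20] = [0.9, 0.7, -0.2, 0.4]
noisy = w + 0.05 * np.random.default_rng(6).normal(size=24)
estimate = block_hard_threshold(noisy, block_size=4, k=2)
print(np.nonzero(estimate)[0])  # only the two true blocks survive
```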