Original Research Paper
Computational Intelligence
Z. K. Pourtaheri
Abstract
Background and Objectives: Given that a typical autonomous underwater vehicle consumes energy when rotating, smoothing the path during path planning is especially important. Moreover, given the inherent randomness of heuristic algorithms, stability analysis of heuristic path planners assumes paramount importance. Methods: The novelty of this paper is to provide an optimal and smooth path for autonomous underwater vehicles in two steps by using two heuristic optimization algorithms, the Inclined Planes system Optimization algorithm and the genetic algorithm; after the Inclined Planes system Optimization algorithm finds the optimal path in the first step, the genetic algorithm is employed to smooth the path in the second step. Another novelty of this paper is the stability analysis of the proposed heuristic path planner, motivated by the stochastic nature of these algorithms. To this end, a two-level factorial design is employed to attain the stability goals of this research. Results: Utilizing a genetic algorithm in the second step of path planning offers two advantages: it smooths the initially discovered path, which not only reduces the energy consumption of the autonomous underwater vehicle but also shortens the path compared to the one obtained by the Inclined Planes system Optimization algorithm alone. Moreover, the stability analysis helps identify important factors and their interactions within the defined objective function. Conclusion: The proposed hybrid method has been implemented for three different maps; improvements of 36.77%, 48.77%, and 50.17% in path length are observed in the three maps, while smoothing the path helps robots save energy. These results confirm the advantage of the proposed process for finding optimal and smooth paths for autonomous underwater vehicles. From the stability results, one can determine the magnitude and direction of the important factors and obtain the regression model.
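As a rough illustration of why smoothing saves energy, the cost of a candidate path can combine its length with a penalty on heading changes. The sketch below is a minimal, hypothetical Python example (the `turn_weight` trade-off parameter and the cost form are assumptions, not the paper's actual objective function); it shows such a cost preferring a straight path over a zig-zag one with the same endpoints.

```python
import math

def path_cost(waypoints, turn_weight=1.0):
    """Cost of a 2-D path: total length plus a penalty on heading
    changes, reflecting the rotation energy an AUV spends turning."""
    length, turn_penalty = 0.0, 0.0
    for i in range(1, len(waypoints)):
        (x0, y0), (x1, y1) = waypoints[i - 1], waypoints[i]
        length += math.hypot(x1 - x0, y1 - y0)
        if i >= 2:
            a_prev = math.atan2(y0 - waypoints[i - 2][1], x0 - waypoints[i - 2][0])
            a_cur = math.atan2(y1 - y0, x1 - x0)
            # wrap the heading difference into [-pi, pi] before penalising it
            turn_penalty += abs(math.atan2(math.sin(a_cur - a_prev),
                                           math.cos(a_cur - a_prev)))
    return length + turn_weight * turn_penalty

# A zig-zag path and a straightened path between the same endpoints:
zigzag = [(0, 0), (1, 1), (2, 0), (3, 1), (4, 0)]
straight = [(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]
```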
Original Research Paper
Nonlinear Control
M. Ghalehnoie; A. Azhdari; J. Keighobadi
Abstract
Background and Objectives: Two-axis inertially stabilized platforms (ISPs) face various challenges such as system nonlinearity, parameter fluctuations, and disturbances, which make the design process more complex. To address these challenges effectively, the main objective of this paper is to realize the stabilization of ISPs by presenting a new robust model-free control scheme. Methods: In this study, a robust adaptive fuzzy control approach is proposed for two-axis ISPs. The proposed approach leverages the backstepping method as its foundational design mechanism, employing fuzzy systems to approximate unknown terms within the control framework. Furthermore, the control architecture incorporates a model-free disturbance observer, enhancing the system's robustness and performance. Additionally, novel adaptive rules are devised, and the uniform ultimate boundedness stability of the closed-loop system is rigorously validated using the Lyapunov theorem. Results: Using MATLAB/Simulink software, simulation results are obtained for the proposed control system, and its performance is assessed against related research works across two scenarios. In the first scenario, where both the desired and initial attitude angles are set to zero, the proposed method demonstrates a substantial mean squared error (MSE) reduction: 96.2% for pitch and 86.7% for yaw compared to the backstepping method, and 75% for pitch and 33.3% for yaw compared to backstepping sliding mode control. In the second scenario, which involves a 10-degree step input, similar improvements are observed alongside superior performance in terms of reduced overshoot and settling time. Specifically, the proposed method achieves a settling time for the pitch gimbal 56.6% faster than the backstepping method and 58% faster for the yaw gimbal. Moreover, the overshoot for the pitch angle is reduced by 53.5% compared to backstepping and 35.5% compared to backstepping sliding mode control, while for the yaw angle, reductions of 43.6% and 37.6% are achieved, respectively. Conclusion: Through comprehensive simulation studies, the efficacy of the proposed algorithm is demonstrated, showcasing its superior performance compared to conventional control methods. Specifically, the proposed method exhibits notable improvements in reducing maximum deviation from desired angles, mean squared errors, settling time, and overshoot, outperforming both the backstepping and backstepping sliding mode control methods.
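The role of the fuzzy systems in such a design is to approximate unknown nonlinear terms. Below is a minimal, hypothetical sketch of a single-input fuzzy approximator with Gaussian memberships and centre-average defuzzification; the membership centres, widths, and weights are illustrative placeholders, and the paper's actual systems and adaptive weight-update laws are more elaborate.

```python
import math

def fuzzy_approx(x, centers, widths, weights):
    """Single-input fuzzy system: Gaussian membership functions with
    centre-average defuzzification, y = sum(w_i * mu_i) / sum(mu_i)."""
    mus = [math.exp(-((x - c) / s) ** 2) for c, s in zip(centers, widths)]
    return sum(w * m for w, m in zip(weights, mus)) / sum(mus)

centers = [-1.0, 0.0, 1.0]
widths = [0.5, 0.5, 0.5]
# Place the weights on the values of the (here known) target nonlinearity
# sin(x) at the centres, mimicking what the adaptive law would learn:
weights = [math.sin(c) for c in centers]
y = fuzzy_approx(0.0, centers, widths, weights)
```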
Original Research Paper
Natural Language Processing
M. Khazeni; M. Heydari; A. Albadvi
Abstract
Background and Objectives: The lack of a suitable tool for analyzing conversational texts in the Persian language has made various analyses of these texts, including sentiment analysis, difficult. In this research, we have tried to make these texts easier for machines to understand by providing PSC, the Persian Slang Convertor, a tool for converting conversational texts into formal ones, and, by using the most up-to-date and best deep learning methods along with PSC, to improve machine sentiment learning on short Persian texts. Methods: More than 10 million unlabeled texts from various social networks and movie subtitles (as conversational texts) and about 10 million news texts (as formal texts) have been used for training unsupervised models and for the formalization component of the tool. 60,000 texts from the comments of Instagram users with positive, negative, and neutral labels are used as supervised data for training the sentiment classification model for short texts. The latest methods such as LSTM, CNN, BERT, and ELMo, and deep learning techniques such as learning rate decay, regularization, and dropout have been used. LSTM was utilized in the research, and the best accuracy was achieved using this method. Results: Using the formalization tool, 57% of the words in the conversational corpus were converted. Finally, by using the formalizer, the FastText model, and a deep LSTM network, an accuracy of 81.91% was obtained on the test data. Conclusion: In this research, an attempt was made to pre-train models using unlabeled data, and in some cases existing pre-trained models such as ParsBERT were used. Then, a model was implemented to classify the sentiment of Persian short texts using labeled data.
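The conversion step can be pictured as a token-level mapping from slang to formal forms. The sketch below uses hypothetical English stand-ins for Persian slang/formal pairs; the real PSC operates on Persian and is far richer than a dictionary lookup, so this is only a schematic of the idea and of how a "fraction of words converted" figure could be measured.

```python
def formalize(text, slang_map):
    """Token-level conversion of conversational words to formal forms;
    returns the converted text and the fraction of tokens converted."""
    tokens = text.split()
    converted = [slang_map.get(t, t) for t in tokens]
    ratio = sum(1 for a, b in zip(tokens, converted) if a != b) / len(tokens)
    return " ".join(converted), ratio

# Hypothetical English stand-ins for Persian slang/formal pairs:
slang_map = {"gonna": "going to", "wanna": "want to"}
formal, ratio = formalize("i am gonna leave", slang_map)
```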
Original Research Paper
Analogue Integrated Circuits
A. Yaseri; M. H. Maghami; M. Radmehr
Abstract
Background and Objectives: In recent years, the electronics industry has experienced rapid expansion, leading to increased concerns surrounding the expenses associated with designing and sizing integrated circuits. The reliability of these circuits has emerged as a critical factor influencing the success of production. Consequently, optimization algorithms that enhance circuit yield have become increasingly important. This article introduces an enhanced approach for optimizing analog circuits through the utilization of a Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) and includes a thorough evaluation. The main goal of this methodology is to improve both the speed and precision of yield calculations. Methods: The proposed approach generates initial designs with the desired characteristics in a critical-analysis phase. Following this, members of the initial population with low yield values, generated using the classical MOEA/D algorithm, are replaced with designs that exceed a predefined yield threshold. This replacement process yields notable improvements in yield efficiency and computational speed compared to alternative Monte Carlo-based methods. Results: To validate the effectiveness of the presented approach, circuit simulations were conducted on a two-stage class-AB Op-Amp in 180 nm CMOS technology. With a high yield value of 99.72%, the approach demonstrates its ability to provide a high-speed, high-accuracy computational solution using only one evolutionary algorithm. Additionally, the observation that modifying the initial population can improve the convergence speed and yield value further enhances the efficiency of the technique. These findings, backed by the simulation results, validate the efficiency and effectiveness of the proposed approach in optimizing the performance of the Op-Amp circuit. Conclusion: This paper presents an enhanced approach for analog circuit optimization using MOEA/D. By incorporating critical analysis, it generates initial designs with the desired characteristics, improving yield calculation efficiency. Designs exceeding a preset yield threshold replace lower-yield members of the initial population, resulting in enhanced computational speed and accuracy compared to other Monte Carlo-based methods. Simulation results for a two-stage class-AB Op-Amp in 180 nm CMOS technology show a yield of 99.72%, highlighting the method's effectiveness in achieving high speed and accuracy with a single evolutionary algorithm.
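One plausible reading of the replacement step is that pre-generated high-yield seed designs displace the worst members of the MOEA/D initial population. The toy sketch below (the `seed_population` helper and the use of bare yield values in place of full circuit designs are illustrative assumptions) shows only that bookkeeping, not the evolutionary algorithm itself.

```python
def seed_population(population, seeds, yield_of, threshold=0.9):
    """Replace the lowest-yield members of an initial population with
    pre-generated seed designs whose yield exceeds the threshold."""
    good_seeds = [s for s in seeds if yield_of(s) > threshold]
    if not good_seeds:
        return list(population)
    ranked = sorted(population, key=yield_of)  # worst members first
    return good_seeds + ranked[len(good_seeds):]

# Toy 'designs' whose yield is the value itself:
pop = [0.2, 0.5, 0.8, 0.6]
seeds = [0.95, 0.3, 0.99]
new_pop = seed_population(pop, seeds, yield_of=lambda d: d)
```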
Original Research Paper
Compressed Sensing
M. Kalantari
Abstract
Background and Objectives: Compressed sensing (CS) of analog signals in shift-invariant spaces can be used to reduce the complexity of the matched-filter (MF) receiver, in which the standard MF performance can be approached with fewer filters. However, with a small number of filters the performance degrades quite rapidly as a function of SNR. In fact, the CS matrix aliases all the noise components, so the noise increases in the compressed measurements. This effect is referred to as noise folding. In this paper, an approach for compensating for the noise folding effect is proposed. Methods: One approach to compensating for this effect is to use a sufficient number of filters. In this paper, the aim is to reach better performance with the same number of filters as in previous work. This can be achieved using a weighting function embedded in the analog-signal compressed sensing structure. In fact, using this weighting function we can remedy the effect of the CS matrix on the noise variance. Results: Compared with the approach based on using a sufficient number of filters to counterbalance the noise increase, experimental results show that with the same number of filters, in terms of probability of correct detection, the proposed approach remarkably outperforms its rival. Conclusion: Noise folding is the main limiting factor in the CS-based matched-filter receiver. The method previously presented to reduce this effect demanded a sufficient number of filters, which comes at a cost. In this paper, we propose a new method based on a weighting function embedded in the analog-signal compressed sensing structure to achieve better performance.
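The weighting idea can be illustrated on the measurement matrix itself: scaling each compressed measurement by the reciprocal norm of its CS-matrix row equalises the folded noise variance across measurements. This is an illustrative sketch under that simplifying assumption, not the paper's exact weighting function.

```python
import math

def row_weights(A):
    """Per-measurement weights 1/||a_i|| that equalise the folded noise
    variance across the compressed measurements (noise through row a_i
    has variance proportional to ||a_i||^2)."""
    return [1.0 / math.sqrt(sum(v * v for v in row)) for row in A]

A = [[2.0, 0.0], [0.0, 1.0]]
w = row_weights(A)
# After weighting, each row of diag(w) @ A has unit norm:
norms = [w[i] * math.sqrt(sum(v * v for v in A[i])) for i in range(len(A))]
```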
Original Research Paper
Data Preprocessing
S. Mahmoudikhah; S. H. Zahiri; I. Behravan
Abstract
Background and Objectives: Sonar data processing is used to identify and track targets whose echoes are unstable, so they are not reliably identified by typical tracking methods. Recently, RLA have effectively improved the accuracy of undersea target detection compared to conventional sonar target recognition procedures, which lack robustness and have low accuracy. Methods: In this research, a combination of classifiers has been used to improve the accuracy of sonar data classification in complex problems such as identifying marine targets. Each of these classifiers forms its own pattern on the data and stores a model. Finally, a weighted vote is performed by the LA algorithm among these classifiers, and the classifier that receives the most votes is the one that has had the greatest impact on improving the performance parameters. Results: The results of SVM, RF, DT, XGBoost, an ensemble method, R-EFMD, T-EFMD, R-LFMD, T-LFMD, ANN, CNN, TIFR-DCNN+SA, and joint models have been compared with the proposed model. Since the objectives and databases differ, we benchmarked the average detection rate. In this comparison, the Precision, Recall, F1-score, and Accuracy parameters have been considered and investigated in order to show the superior performance of the proposed method over other methods. Conclusion: The results obtained for Precision, Recall, F1-score, and Accuracy have been examined against the latest similar research; the proposed method achieves 87.71%, 88.53%, 87.8%, and 87.4%, respectively, for these parameters.
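The combining step described above amounts to a weighted vote across classifiers. A minimal sketch follows (illustrative only: the weights here are fixed by hand, whereas the paper learns them with the LA algorithm):

```python
def weighted_vote(predictions, weights):
    """Weighted majority vote across classifiers: each classifier's vote
    counts with its weight; the label with the largest total wins."""
    totals = {}
    for label, w in zip(predictions, weights):
        totals[label] = totals.get(label, 0.0) + w
    return max(totals, key=totals.get)

# Three classifiers vote; two lighter votes are outweighed by one heavy one:
label = weighted_vote(["mine", "rock", "rock"], [0.6, 0.25, 0.25])
```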
Original Research Paper
Artificial Intelligence
S. S. Musavian; A. Taghizade; F. Z. Ahmadi; S. Norouzi
Abstract
Background and Objectives: The purpose of this study is to propose a solution for using large fuzzy rule sets in assessment tasks with a significant number of items, focusing on the assessment of media and educational tools. Ensuring fairness is crucial in evaluation tasks, especially when different evaluators assign different ratings to the same process, or when their ratings vary across situations. Also, previous non-fuzzy assessment methods show that the mean of assessors' scores is not a good representation when the variance of the scores is significant. Fuzzy evaluation methods can solve this problem by addressing the uncertainty in evaluation tasks. Although some studies have been conducted on fuzzy assessment, their main focus is fuzzy calculations, and no solution has been proposed for the problem that arises when the fuzzy rule set is considerably large. Methods: Fuzzy rules are the key to fuzzy inference. This part of a fuzzy system is often generated by experts; in this study, 15 experts were asked to create the set of fuzzy rules. Fuzzy rules relate inputs to outputs by descriptive linguistic expressions. Forming these expressions is much more convenient than determining an exact relationship between inputs and outputs. The number of fuzzy rules grows exponentially with the number of inputs; therefore, for a task with more than, say, 6 inputs, we must deal with a huge set of fuzzy rules. This paper presents a solution that enables the use of large fuzzy rule sets in fuzzy systems through a multi-stage hierarchical approach. Results: Fairness is always the most important issue in an assessment process. By its nature, a fuzzy calculation-based assessment provides an assessment in a fair manner. Since many assessment tasks involve more than 10 items, generating a flat fuzzy rule set is impractical. Results show that the final score is very sensitive to slight differences in an item's score given by assessors. Besides that, assessors are often unable to consider all items simultaneously in order to assign a coefficient for the effect of each item on the final score. This becomes a serious problem when the final score depends on many input items. In this study, we proposed a fuzzy analysis method to ensure equitable evaluation of educational media and instructional tools within the teaching process. Results of the non-fuzzy scoring system show that the final score varies strongly when the assessment is done at different times and by different assessors, because of the manner in which importance coefficients are calculated for each assessment item. In fuzzy assessment, no importance coefficient is used for any item. Conclusion: In this study, a novel method was proposed to determine the score of an activity, task, or tool designed for learning purposes based on fuzzy sets and their respective calculations. Because of the nature of fuzzy systems, approximate descriptive expressions are used to relate the input items to the final score instead of an exact function that is impossible to estimate. The fuzzy method is robust and ensures a fair assessment.
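The combinatorial pressure that the hierarchy relieves is easy to quantify: with 3 linguistic terms per input, a flat rule base over 12 items needs 3^12 rules, while splitting the items into three 4-input sub-systems feeding one combining stage needs only a few hundred. The back-of-the-envelope sketch below assumes a simple two-level hierarchy; the paper's actual staging may differ.

```python
def flat_rule_count(n_inputs, n_terms):
    """Rules needed by a single flat fuzzy system over all inputs."""
    return n_terms ** n_inputs

def hierarchical_rule_count(group_sizes, n_terms):
    """Rules needed when inputs are split into sub-systems whose outputs
    feed one combining stage (a two-level hierarchy)."""
    sub = sum(n_terms ** g for g in group_sizes)
    return sub + n_terms ** len(group_sizes)

flat = flat_rule_count(12, 3)                  # 3^12 rules
hier = hierarchical_rule_count([4, 4, 4], 3)   # 3*3^4 + 3^3 rules
```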
Original Research Paper
Artificial Intelligence
B. Mahdipour; S. H. Zahiri; I. Behravan
Abstract
Background and Objectives: Path planning is one of the most important topics related to the navigation of all kinds of moving vehicles, such as airplanes, surface and subsurface vessels, cars, etc. Undoubtedly, in the process of making these vehicles more intelligent, detecting and passing obstacles without collision while taking the shortest path is one of the most important goals of researchers. Significant success in this field can lead to significant progress in the use of these vehicles in a variety of applications, such as industrial, military, transportation, and commercial. In this paper, a metaheuristic-based approach introducing new fitness functions is presented for the path planning problem of various types of surface and subsurface moving vehicles. Methods: The proposed approach to path planning in this research is based on metaheuristic methods and makes use of a novel fitness function. Particle Swarm Optimization (PSO) is the metaheuristic method leveraged in this research, but other metaheuristic methods can also be used in the proposed path planning architecture. Results: The efficiency of the proposed method is tested on two synthetic environments for finding the best path between a predefined origin and destination for both surface and subsurface unmanned intelligent vessels. In both cases, the proposed method was able to find the best path or one very close to it. Conclusion: In this paper, an efficient method for the path planning problem is presented. The proposed method is designed using Particle Swarm Optimization (PSO). In the proposed method, several effective fitness functions have been defined so that the best path, or one of the closest answers, can be obtained by the utilized metaheuristic algorithm. The results of implementing the proposed method on real and simulated geographic data show its good performance. Also, the obtained quantitative results (elapsed time, success rate, path cost, and standard deviation) have been compared with other similar methods. In all of these measurements, the proposed algorithm outperforms the other methods or is comparable to them.
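A fitness function of the kind described can score a candidate waypoint sequence by its length plus a heavy penalty for waypoints falling inside obstacles, so PSO is steered toward short, collision-free paths. This is a minimal sketch with hypothetical circular obstacles and an assumed `penalty` constant; the paper's fitness functions are richer.

```python
import math

def path_fitness(waypoints, obstacles, penalty=1e3):
    """Fitness for a candidate path: total length plus a large penalty
    for every waypoint falling inside a circular obstacle (cx, cy, r)."""
    length = sum(math.hypot(x1 - x0, y1 - y0)
                 for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]))
    hits = sum(1 for (x, y) in waypoints
               for (cx, cy, r) in obstacles
               if math.hypot(x - cx, y - cy) < r)
    return length + penalty * hits

obstacles = [(1.0, 0.0, 0.5)]
direct = [(0, 0), (1, 0), (2, 0)]   # cuts through the obstacle
detour = [(0, 0), (1, 1), (2, 0)]   # goes around it
```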
Original Research Paper
Power Electronics
P. Hamedani
Abstract
Background and Objectives: To overcome the disadvantages of traditional two-level inverters, especially in electric drive applications, multi-level inverters (MLIs) are the widely accepted solution. Diode-Clamped Inverters (DCIs) are a well-known multi-level inverter structure. In DCIs, the voltage balance of the DC-link capacitors and the reduction of the common-mode (CM) voltage are two important criteria that should be considered. Methods: This paper concentrates on the current control of a 3-phase, 4-level DCI with a finite-control-set model predictive control (MPC) strategy. Current tracking performance, DC-link capacitor voltage balance, switching frequency minimization, and CM voltage control are considered in the objective function of the MPC. Moreover, the multistep prediction method is applied to improve the performance of the DCI. Results: The effectiveness of the proposed multistep predictive control for the 4-level DCI has been evaluated with different horizon lengths. Moreover, the effect of several weighting factor values on the system behavior has been studied. Conclusion: The results validate the accuracy of current tracking and voltage balancing in the suggested multistep MPC for the 4-level DCI. In addition, CM voltage control and switching frequency reduction can be included in the predictive control. However, decreasing the CM voltage and switching frequency adversely affects the dynamic behavior and voltage balancing of the DCI; therefore, the selection of the weighting factors depends on the system's needs and requirements.
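Finite-control-set MPC works by enumerating every admissible switching level, predicting the next state for each, and picking the one minimising a weighted cost. The sketch below is a deliberately tiny, hypothetical single-state version (a first-order toy current model, one-step horizon, a single switching-effort weight `w_sw`), not the paper's 4-level, multistep, multi-objective controller.

```python
def fcs_mpc_step(levels, model, x, x_ref, w_sw, u_prev):
    """One step of finite-control-set MPC: enumerate every admissible
    level, predict the next state, and pick the level minimising a cost
    trading tracking error against switching effort."""
    best, best_cost = None, float("inf")
    for u in levels:
        x_next = model(x, u)
        cost = (x_next - x_ref) ** 2 + w_sw * abs(u - u_prev)
        if cost < best_cost:
            best, best_cost = u, cost
    return best

# Toy first-order current model i[k+1] = 0.9*i[k] + 0.5*u:
model = lambda i, u: 0.9 * i + 0.5 * u
u = fcs_mpc_step(levels=[-1, 0, 1], model=model, x=0.0, x_ref=0.5,
                 w_sw=0.01, u_prev=0)
```

Raising `w_sw` makes switching expensive, so the controller holds the previous level; this is the trade-off between tracking and switching frequency that the weighting factors tune.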
Original Research Paper
Power Electronics
F. Sedaghati; S. A. Azimi
Abstract
Background and Objectives: Increasing environmental problems have led to the spread of Electric Vehicles (EVs). One of the attractive research fields in electric vehicles is battery charging for this strategic product. Electric vehicle battery chargers often lack bidirectional power flow and the flexibility to handle a wide range of battery voltages. This study proposes a non-isolated bidirectional DC-DC converter connected to a T-type converter with a reduced number of switches to address this limitation. Methods: The proposed converter uses a DC-DC converter with an interleaved structure along with a three-level T-type converter with a reduced number of switches and a common ground for the input and output terminals. Space vector pulse width modulation (SVPWM) and carrier-based sinusoidal pulse width modulation (CBPWM) control the converter for vehicle-to-grid (V2G) and grid-to-vehicle (G2V) operation, respectively. Results: Theoretical analysis shows 96.9% efficiency at 15.8 kW output power and 3.06% THD during charging, with low battery voltage ripple. In V2G mode, the converter achieves an efficiency of 96.5% while injecting 0.5 kW of power into the 380 V, 50 Hz grid. The DC-link voltage is stabilized, and the proposed converter performs well over a wide range of battery voltages. Conclusion: The proposed converter offers high efficiency and cost reduction. It enables the charging of a wide range of batteries and provides both V2G and G2V power flow. The proposed converter qualifies for the fast battery charging category, and its ability to charge two batteries makes it a suitable option for charging stations.
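CBPWM, used here for G2V operation, compares a sinusoidal reference against a triangular carrier and switches on whenever the reference is higher. The toy sketch below is a single-phase illustration with hypothetical frequencies and modulation index, not the converter's actual gating logic; it only checks that the average duty over a full reference period is about one half, as expected for a zero-mean sinusoidal reference.

```python
import math

def cbpwm(t, m, f_ref, f_carrier):
    """Carrier-based sinusoidal PWM: the switch is on whenever the
    sinusoidal reference (modulation index m) exceeds a triangular
    carrier spanning [-1, 1], both evaluated at time t (seconds)."""
    ref = m * math.sin(2 * math.pi * f_ref * t)
    phase = (t * f_carrier) % 1.0
    carrier = 4 * phase - 1 if phase < 0.5 else 3 - 4 * phase
    return 1 if ref > carrier else 0

# Sample one 50 Hz reference period at 100 kHz with a 1 kHz carrier:
samples = [cbpwm(t / 100000.0, m=0.8, f_ref=50, f_carrier=1000)
           for t in range(2000)]
duty = sum(samples) / len(samples)
```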
Original Research Paper
Bioelectric
Z. Rabiei; H. Montazery Kordy
Abstract
Background and Objectives: Neuroscience research can benefit greatly from the fusion of simultaneous recordings of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) data due to their complementary properties. Shared information can be extracted by coupling the two modalities in a symmetric data fusion. Methods: This paper proposes an approach based on the advanced coupled matrix-tensor factorization (ACMTF) method for analyzing simultaneous EEG-fMRI data. To alleviate the strict equality assumption on shared factors in the common dimension of the ACMTF, the proposed method uses a similarity criterion based on normalized mutual information (NMI). This similarity criterion effectively reveals the underlying relationships between the modalities, resulting in more accurate factorization results. Results: The suggested method was applied to simulated data with various levels of correlation between the components of the two modalities. Across different noise levels, the average match score improved compared to the ACMTF model, as demonstrated by the results. Conclusion: By relaxing the strict equality assumption, shared components in a common mode can be identified and extracted with higher performance than with traditional methods. The suggested method offers a more robust and effective way to analyze multimodal data sets. The findings highlight the potential of the ACMTF method with an NMI-based similarity criterion for uncovering hidden patterns in EEG and fMRI data.
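The NMI similarity criterion can be sketched for discrete sequences as the mutual information normalised by the geometric mean of the two entropies, giving a score in [0, 1]. This is a generic textbook NMI on already-discretised values (how the paper discretises continuous factor components is not specified here, so that step is an assumption).

```python
import math

def nmi(xs, ys):
    """Normalised mutual information between two discrete sequences:
    I(X;Y) / sqrt(H(X) * H(Y)), in [0, 1]."""
    n = len(xs)
    px, py, pxy = {}, {}, {}
    for a, b in zip(xs, ys):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    h = lambda counts: -sum(c / n * math.log(c / n) for c in counts.values())
    mi = sum(c / n * math.log((c / n) / (px[a] / n * py[b] / n))
             for (a, b), c in pxy.items())
    return mi / math.sqrt(h(px) * h(py))

# Identical components give NMI 1; weakly related ones score much lower:
a = [0, 0, 1, 1, 0, 1]
b = [0, 1, 0, 1, 0, 1]
```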
Original Research Paper
Image Processing
S. Fooladi; H. Farsi; S. Mohamadzadeh
Abstract
Background and Objectives: The increasing prevalence of skin cancer highlights the urgency of early intervention, emphasizing the need for advanced diagnostic tools. Computer-assisted diagnosis (CAD) offers a promising avenue to streamline skin cancer screening and alleviate the associated costs. Methods: This study develops an automatic segmentation system employing deep neural networks, seamlessly integrating data manipulation into the learning process. Utilizing an encoder-decoder architecture rooted in U-Net and augmented by the wavelet transform, the methodology facilitates the generation of high-resolution feature maps, thus bolstering the precision of the deep learning model. Results: Performance evaluation metrics including sensitivity, accuracy, Dice coefficient, and Jaccard similarity confirm the superior efficacy of the model compared to conventional methodologies. The results show an accuracy of 96.89% for skin lesions in the PH2 database and 95.8% for the ISIC 2017 database, which is promising compared to the results of other studies. Additionally, this research shows significant improvements in three metrics: sensitivity, Dice, and Jaccard. For the PH2 database, the values are 96, 96.40, and 95.40, respectively; for the ISIC database, the values are 92.85, 96.32, and 95.24, respectively. Conclusion: In image processing and analysis, numerous solutions have emerged to aid dermatologists in their diagnostic work. The proposed algorithm was evaluated on the PH2 and ISIC 2017 datasets, and the results were compared with recent studies. The proposed algorithm demonstrated superior performance in terms of accuracy, sensitivity, Dice coefficient, and Jaccard similarity when evaluated on the same database images compared to other methods.
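The Dice and Jaccard metrics reported above are standard overlap scores between a predicted segmentation mask and the ground truth. A minimal sketch on flat binary masks (a generic definition, not the authors' evaluation code):

```python
def dice_jaccard(pred, truth):
    """Dice coefficient and Jaccard similarity for binary masks given as
    flat 0/1 lists of the same length."""
    inter = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    dice = 2 * inter / (sum(pred) + sum(truth))
    return dice, inter / union

pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
dice, jacc = dice_jaccard(pred, truth)
```

The two scores are linked by Dice = 2J / (1 + J), so they always rank segmentations identically.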
Original Research Paper
Power Systems
A. Yazdaninejadi; M. Akhavan
Abstract
Background and Objectives: Protection of sub-transmission systems requires maintaining selectivity in the combined scheme of distance and directional overcurrent relays (DOCRs). This presents a complex challenge that calls for a robust solution. Thereby, the objective of the present study is to decrease the number of violations and minimize the tripping time of the relays in this particular problem. Methods: This study addresses the challenge by using numerical DOCRs that follow non-standard tripping characteristics without compromising the compatibility of the curves. In this process, the time-current characteristics of the relays are described in such a manner that they can maintain selectivity among themselves and with the distance relays. Therefore, in addition to the second-zone timing of the distance relays and the time dial settings and plug settings of the overcurrent relays, the other coefficients of the inverse-time characteristics are also optimized. The optimization procedure is formulated as a nonlinear programming model and tackled using the particle swarm optimization (PSO) algorithm. Results: This approach is verified by application to two test systems and compared against conventional methods. The obtained results show that the proposed approach yields a selective protection scheme owing to the provided flexibility. Conclusion: The research effectively enhanced selectivity in sub-transmission systems and minimized relay tripping times through the innovative use of numerical DOCRs and PSO-based optimization.
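The extra degrees of freedom come from the inverse-time curve itself: t = TDS * A / ((I/Ip)^B - 1). With A = 0.14 and B = 0.02 this is the IEC standard-inverse curve; treating A and B as optimizable gives a non-standard characteristic. The sketch below only evaluates the curve (the specific fault current, pickup, and alternative coefficients are hypothetical).

```python
def trip_time(i_fault, i_pickup, tds, a=0.14, b=0.02):
    """Inverse-time overcurrent characteristic t = TDS * A / ((I/Ip)^B - 1).
    Defaults give the IEC standard-inverse curve; a non-standard
    characteristic lets a and b be optimised as well."""
    m = i_fault / i_pickup
    return tds * a / (m ** b - 1)

# Standard-inverse trip time versus a hypothetical optimised curve:
t_std = trip_time(i_fault=1000, i_pickup=200, tds=0.1)
t_fast = trip_time(i_fault=1000, i_pickup=200, tds=0.1, a=0.05, b=0.04)
```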
Original Research Paper
Smart Grid
F. Ahmed Shaban; S. Golshannavaz
Abstract
Background and Objectives: In the smart grid paradigm, many versatile applications can be fostered, such as smart homes, smart buildings, and smart hospitals. Smart hospitals, wherein patients are the possible consumers, are one of the recent interests within this paradigm. Internet of Things (IoT) technology has provided a unique platform for realizing healthcare systems, through which patients' health data are collected and analyzed to enable continuous patient monitoring and, hence, greatly improve healthcare systems. Methods: Predictive machine learning techniques are employed to classify the health conditions of individuals. Patient data are obtained from IoT devices and electrocardiogram (ECG) recordings. Efficient data pre-processing is then conducted, including data cleaning, feature engineering, ECG signal processing, and class balancing. Artificial intelligence (AI) is deployed to provide a system that learns and automates processes. Five machine learning algorithms, namely Support Vector Machine (SVM), Extreme Gradient Boosting (XGBoost), logistic regression, Naive Bayes, and random forest, serve as the AI engines to classify health status based on biometric and ECG data. The most relevant output signals are then propagated to doctors' and nurses' receivers, providing them with initial pre-judgments to support final decisions. Results: The conducted analysis shows that logistic regression outperforms the other machine learning algorithms with an F1 score, recall, precision, and accuracy of 0.91, followed by XGBoost with 0.88 across all metrics. SVM and Naive Bayes both achieved an accuracy of 0.85, while random forest attained 0.86.
Moreover, the Receiver Operating Characteristic Area Under Curve (ROC-AUC) scores confirm the robustness of logistic regression and XGBoost as apt candidates for the developed healthcare system. Conclusion: The study demonstrates the promising potential of AI-based machine learning algorithms for devising predictive healthcare systems capable of initial diagnosis and preliminary decision-making that clinicians can rely on. What is more, the availability of biometric data and the features of the proposed system significantly contribute to primary care assessments.
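The model-comparison step described above can be sketched as follows. This is a minimal illustration on synthetic tabular data, not the paper's pipeline: the real system uses biometric and ECG-derived features, and scikit-learn's GradientBoostingClassifier stands in for XGBoost so the sketch depends on scikit-learn only.

```python
# Sketch of comparing five classifiers by F1 score, assuming a generic
# tabular dataset; the real study uses biometric and ECG features.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# GradientBoostingClassifier is an illustrative stand-in for XGBoost.
models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(),
    "naive_bayes": GaussianNB(),
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
scores = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    scores[name] = f1_score(y_te, model.predict(X_te))

for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: F1 = {s:.2f}")
```

In a real deployment the same loop would also report precision, recall, accuracy, and ROC-AUC, as in the study.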
Original Research Paper
Machine Learning
A. Ahmadi; R. Mahboobi Esfanjani
Abstract
Background and Objectives: A predefined structure is usually employed for deep neural networks, which results in over- or underfitting, a heavy processing load, and storage overhead. Training combined with pruning can decrease redundancy in deep neural networks; however, it may lead to a decrease in accuracy. Methods: In this note, we provide a novel approach for structure optimization of deep neural networks based on competition among connections, merged with brain-inspired synaptic pruning. In the proposed scheme, the efficiency of each network connection is continuously assessed using the global gradient magnitude criterion, which assigns positive scores to strong, more effective connections and negative scores to weak ones. A weakly scored connection is not removed immediately; instead, it is eliminated only when its net score reaches a predetermined threshold. Moreover, the pruning rate is obtained separately for each layer of the network. Results: Applying the suggested algorithm to a neural network model of a distillation column in a noisy environment demonstrates its effectiveness and applicability. Conclusion: The proposed method, inspired by connection competition and synaptic pruning in the human brain, enhances learning speed, preserves accuracy, and reduces costs due to the smaller network size. It also handles noisy data more efficiently by continuously assessing network connections.
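The scoring-and-delayed-removal idea can be sketched as below. The +1/-1 score increments, the threshold value, and the use of random numbers in place of real gradients are illustrative assumptions; the paper's exact update rule is not reproduced here.

```python
# Sketch of score-based delayed pruning: connections whose gradient
# magnitude exceeds the global mean gain score, the rest lose score,
# and a connection is masked out only once its accumulated (net) score
# falls below a threshold. Values and update rule are illustrative.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(8, 8))        # one layer's connection weights
scores = np.zeros_like(weights)          # running net score per connection
mask = np.ones(weights.shape, dtype=bool)
THRESHOLD = -3.0                         # hypothetical pruning threshold

for step in range(10):
    grads = rng.normal(size=weights.shape)           # stand-in for gradients
    strong = np.abs(grads) >= np.abs(grads).mean()   # global magnitude criterion
    scores += np.where(strong, 1.0, -1.0)
    mask &= scores > THRESHOLD           # prune only persistently weak links
    weights *= mask                      # removed connections stay at zero

print(f"remaining connections: {mask.sum()} / {mask.size}")
```

The key point the sketch captures is that a single weak step does not remove a connection; only a persistently negative net score does.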
Original Research Paper
Electrical Machines
H. Afsharirad; S. Misaghi
Abstract
Background and Objectives: Due to the high torque ripple and stator current harmonics in direct torque control using a two-level inverter, the use of multilevel inverters has become common to reduce these two factors. Among multilevel inverters, the neutral-point-clamped inverter has received the most attention in industry due to its advantages. This inverter provides 27 voltage vectors by which torque and flux are controlled. To reduce torque ripple and current harmonics as much as possible, methods such as space vector modulation or the use of multilevel inverters with more levels have been considered, but their main drawback is increased complexity and cost. Methods: In this article, virtual voltage vectors are used to increase the number of hysteresis controller levels. These vectors are obtained from the sum of two voltage vectors, yielding 12 voltage vectors in addition to the diode-clamped inverter's own voltage vectors. The number of torque hysteresis levels can therefore be increased from 7 to 11. Results: Because the proposed method relies only on virtual and existing voltage vectors, it does not increase cost or computational complexity. It also operates at a fixed switching frequency, which solves the variable switching frequency problem of conventional methods. The proposed control therefore achieves an overall optimization. Conclusion: To verify the feasibility of the proposed method and compare it with the conventional one, both methods are simulated in the MATLAB/Simulink environment; the simulation results demonstrate the efficiency of the proposed control method, which achieves lower torque ripple and current harmonics without increasing cost or computational complexity.
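The vector arithmetic behind the method can be sketched as follows: the 27 switching states of a three-level inverter map to space vectors via the Clarke transform, and a virtual vector is simply the sum of two of them. The normalized Vdc and the particular pair summed below are illustrative assumptions, not the paper's switching table.

```python
# Sketch of the 27 space vectors of a three-level NPC inverter and a
# "virtual vector" built as the sum of two of them. Vdc and the chosen
# pair are illustrative.
import cmath
from itertools import product

Vdc = 1.0
a = cmath.exp(2j * cmath.pi / 3)  # 120-degree rotation operator

def space_vector(sa, sb, sc):
    """Map per-phase levels (0, 1, 2) to a complex space vector."""
    va, vb, vc = (s * Vdc / 2 for s in (sa, sb, sc))
    return (2 / 3) * (va + a * vb + a**2 * vc)

states = list(product((0, 1, 2), repeat=3))          # 27 switching states
vectors = {s: space_vector(*s) for s in states}
print(f"switching states: {len(states)}")

# Hypothetical virtual vector: sum of a medium and a large vector.
v_virtual = vectors[(2, 1, 0)] + vectors[(2, 0, 0)]
print(f"|virtual vector| = {abs(v_virtual):.3f}")
```

Because virtual vectors are computed from existing ones rather than synthesized by extra hardware, they enlarge the hysteresis controller's vector set at no added cost, which is the point the abstract makes.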
Original Research Paper
Artificial Intelligence
K. Moeenfar; V. Kiani; A. Soltani; R. Ravanifard
Abstract
Background and Objectives: In this paper, a novel and efficient unsupervised machine learning algorithm named EiForestASD is proposed for distinguishing anomalies from normal data in data streams. The proposed algorithm leverages a forest of isolation trees to detect anomalous data instances. Methods: The proposed method EiForestASD incorporates an isolation forest as an adaptable detector model that adjusts to new data over time. To handle concept drifts in the data stream, a window-based concept drift detector is employed that discards only those isolation trees that are incompatible with the new concept. The proposed method is implemented using the Python programming language and the Scikit-Multiflow library. Results: Experimental evaluations were conducted on six real-world and two synthetic data streams. Results reveal that the proposed method EiForestASD reduces computation time by 19% and enhances the anomaly detection rate by 9% compared to the baseline method iForestASD. These results highlight the efficacy and efficiency of EiForestASD in the context of anomaly detection in data streams. Conclusion: The EiForestASD method handles concept change using an intelligent strategy in which only those trees of the detector model that are incompatible with the new concept are removed and reconstructed. This modification of the concept drift handling mechanism significantly reduces computation time and improves anomaly detection accuracy.
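The partial-replacement idea can be sketched with scikit-learn's IsolationForest. This is a simplified illustration, not the paper's implementation (which uses Scikit-Multiflow): the drift signal used here (a jump in the flagged-anomaly rate) and the tree-selection rule (keeping the deeper half as a crude proxy for "compatible" trees) are assumptions.

```python
# Sketch of window-based drift handling in the spirit of EiForestASD:
# when drift is detected, only part of the tree ensemble is replaced
# with trees grown on the new window, instead of rebuilding everything.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
window_old = rng.normal(0.0, 1.0, size=(256, 2))   # old concept
window_new = rng.normal(4.0, 1.0, size=(256, 2))   # drifted concept

forest = IsolationForest(n_estimators=50, random_state=0).fit(window_old)
trees = list(forest.estimators_)

rate = float((forest.predict(window_new) == -1).mean())  # flagged fraction
if rate > 0.3:                                           # crude drift test
    # Keep half of the trees (deeper half, as an illustrative proxy for
    # trees still compatible with the new concept) ...
    depths = [t.tree_.max_depth for t in trees]
    survivors = [trees[i] for i in np.argsort(depths)[len(trees) // 2:]]
    # ... and grow replacements on the new window only.
    fresh = IsolationForest(n_estimators=len(trees) - len(survivors),
                            random_state=1).fit(window_new)
    trees = survivors + list(fresh.estimators_)

print(f"anomaly rate on new window: {rate:.2f}")
print(f"trees after update: {len(trees)}")
```

Replacing only part of the ensemble is what yields the reported computation-time savings relative to retraining the whole forest on every drift.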
Original Research Paper
Machine Learning
S. Khonsha; M. A. Sarram; R. Sheikhpour
Abstract
Background and Objectives: Stock recommender systems (SRSs) based on deep reinforcement learning (DRL) have garnered significant attention within the financial research community. A robust DRL agent aims to consistently allocate some amount of cash to a combination of high-risk and low-risk stocks, with the ultimate objective of maximizing returns while balancing risk. However, existing DRL-based SRSs focus on one or, at most, two sequential trading agents that operate within the same or a shared environment, and they often make mistakes in volatile or variable market conditions. In this paper, a robust Concurrent Multiagent Deep Reinforcement Learning-based Stock Recommender System (CMSRS) is proposed. Methods: The proposed system introduces a multi-layered architecture that includes feature extraction at the data layer to construct multiple trading environments, so that differently fed DRL agents can robustly recommend assets to the trading layer. The proposed CMSRS uses a variety of data sources, including Google stock trends, fundamental data, and technical indicators along with historical price data, for the selection and recommendation of suitable stocks to buy or sell concurrently by multiple agents. To optimize hyperparameters during the validation phase, we employ the Sharpe ratio as a risk-adjusted return measure. Additionally, we address liquidity requirements by defining a precise reward function that dynamically manages cash reserves, penalizing the model for failing to maintain a cash reserve. Results: Empirical results on real U.S. stock market data show the superiority of our CMSRS, especially in volatile markets and on out-of-sample data. Conclusion: The proposed CMSRS demonstrates significant advancements in stock recommendation by effectively leveraging multiple trading agents and diverse data sources. The empirical results underscore its robustness and superior performance, particularly in volatile market conditions.
This multi-layered approach not only optimizes returns but also efficiently manages risks and liquidity, offering a compelling solution for dynamic and uncertain financial environments. Future work could further refine the model's adaptability to other market conditions and explore its applicability across different asset classes.
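Two of the ingredients above, Sharpe-ratio-based validation and a reward that penalizes depleted cash reserves, can be sketched as follows. The 5% minimum cash fraction, the penalty weight, and the annualization constant are illustrative assumptions, not the paper's values.

```python
# Sketch of a risk-adjusted validation metric (Sharpe ratio) and a
# reward function with a cash-reserve penalty. Constants are assumed.
import numpy as np

RISK_FREE = 0.0            # assumed per-step risk-free rate
MIN_CASH_FRACTION = 0.05   # hypothetical required cash reserve
PENALTY_WEIGHT = 0.1       # hypothetical penalty strength

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a series of per-step returns."""
    excess = np.asarray(returns) - RISK_FREE
    return float(np.sqrt(periods_per_year) * excess.mean() / excess.std())

def reward(value_prev, value_now, cash_now):
    """Step return minus a penalty for a cash reserve below the floor."""
    step_return = (value_now - value_prev) / value_prev
    shortfall = max(0.0, MIN_CASH_FRACTION - cash_now / value_now)
    return step_return - PENALTY_WEIGHT * shortfall

rng = np.random.default_rng(0)
daily = rng.normal(0.0005, 0.01, size=252)  # synthetic daily returns
print(f"Sharpe: {sharpe_ratio(daily):.2f}")
print(f"reward, healthy cash:  {reward(100.0, 101.0, 10.0):.4f}")
print(f"reward, depleted cash: {reward(100.0, 101.0, 1.0):.4f}")
```

The same portfolio gain earns a lower reward when cash falls below the floor, which is how the agent is trained to keep liquidity available.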
Original Research Paper
Communications Networks
M. Z. Rahman; J. E. Giti; S. A. H. Chowdhury; M. S. Anower
Abstract
Background and Objectives: Node counting is an essential task, since the node count is one of the important parameters for maintaining the proper functionality of any wireless communications network, including undersea acoustic sensor networks (UASNs). In undersea communications networks, protocol-based node counting techniques suffer from poor performance due to the unique propagation characteristics of the medium. To solve the node counting problem in an undersea network, an approach based on the cross-correlation (CC) of Gaussian signals has previously been introduced. However, the limited bandwidth (BW) of undersea communication presents a significant challenge to the CC-based node counting technique, which traditionally uses Gaussian signals of infinite BW. This article investigates this limitation. Methods: To tackle the infinite-BW issue, a band-limited Gaussian signal is employed for counting nodes, which affects the cross-correlation function (CCF) and the derived estimation parameters. To relate the estimation parameters of the finite- and infinite-BW scenarios, a scaling factor (SF) is determined for a specific BW by averaging their ratios across different node counts. Results: Error-free estimation under a band-limited condition is reported in this work when the SF for that BW is known. Given the typical undersea BW range of 1–15 kHz, it is also important to establish a relationship between the SF and the BW. This relationship, derived and validated through simulation, allows the SF to be determined and an accurate node count to be achieved under any band-limited condition within the 1–15 kHz range. Furthermore, node counting performance is evaluated for finite-BW scenarios in terms of a statistical parameter called the coefficient of variation (CV).
As a side contribution, the effect of noise on the CC-based undersea node counting approach is also explored. Conclusion: This research reveals that successful node counting can be achieved using the CC-based technique in the presence of finite undersea BW constraints.
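The scaling-factor construction can be sketched numerically: compute some CCF-derived estimation parameter for both full-band and band-limited Gaussian signals at several node counts, then average the ratios. The parameter used below (the CCF peak magnitude), the crude FFT low-pass filter, and the noise level are all illustrative assumptions; the paper's actual estimation parameters are not reproduced here.

```python
# Sketch of the SF idea: ratio of a CCF-derived parameter between the
# full-band and band-limited cases, averaged over node counts.
import numpy as np

rng = np.random.default_rng(0)

def band_limit(x, keep_fraction):
    """Crude low-pass: zero out the upper part of the spectrum."""
    X = np.fft.rfft(x)
    X[int(len(X) * keep_fraction):] = 0.0
    return np.fft.irfft(X, n=len(x))

def estimation_parameter(n_nodes, keep_fraction=1.0, n_samples=2048):
    # Two receivers hear the superposition of n_nodes Gaussian signals
    # plus independent receiver noise; the parameter is the CCF peak.
    signals = rng.normal(size=(n_nodes, n_samples))
    if keep_fraction < 1.0:
        signals = np.array([band_limit(s, keep_fraction) for s in signals])
    rx1 = signals.sum(axis=0) + 0.1 * rng.normal(size=n_samples)
    rx2 = signals.sum(axis=0) + 0.1 * rng.normal(size=n_samples)
    return np.correlate(rx1, rx2, mode="full").max()

node_counts = [5, 10, 20, 40]
ratios = [estimation_parameter(n) / estimation_parameter(n, keep_fraction=0.25)
          for n in node_counts]
sf = float(np.mean(ratios))
print(f"scaling factor for this band limit: {sf:.2f}")
```

Once the SF for a given BW is known (or read off the derived SF-BW relationship), the band-limited estimate is rescaled to recover the full-band node count.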
Original Research Paper
Power Electronics
F. Sedaghati; S. Ebrahimzadeh; H. Dolati
Abstract
Background and Objectives: Increasing environmental problems and challenges have led to the increased use of renewable energy sources such as photovoltaic (PV) systems. One attractive research field is power electronic converters as interfaces for renewable energy sources, and multilevel inverters can serve as such interfaces. This paper introduces modified topologies of switched-capacitor multilevel inverters, designed to overcome the constraints of low-voltage renewable energy sources such as PV. Methods: The proposed topologies utilize a single DC source with series or parallel connection of capacitors to produce 7-level, 9-level, and 11-level voltages on the converter load side. The paper presents the converter operation principle, voltage stress analysis of the elements, and capacitor sizing calculations. The operation of the suggested inverter topologies is also validated on an implemented experimental setup. Results: A comprehensive comparative analysis reveals that the proposed topologies offer superior performance compared to existing solutions in terms of component count, voltage boost factor, and voltage stress. The experimental measurements confirm the accuracy of the multilevel output voltage waveforms and the self-balancing of the capacitor voltages, as predicted by the theoretical analysis. Conclusion: Beyond their superiority over previously presented topologies, the suggested switched-capacitor multilevel inverters show great potential for application in photovoltaic systems and electric vehicle battery banks.
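The level-counting arithmetic for such a converter can be sketched as below. The specific step set (0, ±Vdc, ±2Vdc, ±3Vdc, i.e. a 7-level staircase with boost factor 3) is an illustrative assumption chosen to match a typical switched-capacitor stage, not the exact switching table of the proposed topologies.

```python
# Sketch of the output-level arithmetic for a switched-capacitor stage:
# with one DC source and capacitors that can be bypassed or series-added,
# and an H-bridge-style polarity reversal, the reachable output levels
# form a symmetric staircase. The step set below is illustrative.
Vdc = 1.0
positive_steps = [0.0, 1.0, 2.0, 3.0]    # achievable magnitudes, in Vdc units

# Polarity reversal mirrors every positive step to a negative one.
levels = sorted({sign * s * Vdc for s in positive_steps for sign in (1, -1)})

print(f"levels: {levels}")
print(f"number of levels: {len(levels)}")
print(f"voltage boost factor: {max(levels) / Vdc:.0f}")
```

Adding one more series-connectable capacitor extends the positive step set by one magnitude and hence adds two output levels, which is how the 7-, 9-, and 11-level variants relate.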