Machine Learning
K. Gorgani Firouzjah; J. Ghasemi
Abstract
Background and Objectives: Power transformer (PT) health assessment is crucial for ensuring the reliability of power systems. Dissolved Gas Analysis (DGA) is a widely used technique for this purpose, but traditional DGA interpretation methods have limitations. This study aims to develop a more accurate and reliable PT health assessment method using an ensemble learning approach with DGA.
Methods: The proposed method utilizes 11 key parameters obtained from real PT samples. Additionally, synthetic data are generated using statistical simulation to enhance the model's robustness. Twelve different classifiers are initially trained and evaluated on the combined dataset. Two novel indices (a risk index and an unnecessary cost index) are introduced to assess the classifiers' performance alongside traditional metrics such as accuracy, precision, and the confusion matrix. An ensemble learning method is then constructed by selecting the classifiers with the lowest risk and cost indices.
Results: The ensemble learning approach demonstrated superior performance compared to individual classifiers. It achieved high accuracy (99%, 92%, and 86% for the three health classes), a low unnecessary cost index (6%), and a low misclassification risk (16%), indicating the effectiveness of the ensemble approach in accurately detecting PT health conditions.
Conclusion: The proposed ensemble learning method provides a reliable and accurate assessment of PT health using DGA data. By minimizing misclassification risks and unnecessary costs, it helps optimize maintenance strategies and enhances the overall reliability of power systems.
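The voting ensemble and the two proposed indices can be sketched as follows. The index definitions below are illustrative interpretations only (the abstract does not give the paper's exact formulas), with class 0 standing for a healthy transformer:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions (one list per classifier) by majority vote."""
    return [Counter(col).most_common(1)[0][0] for col in zip(*predictions)]

def risk_index(y_true, y_pred, healthy=0):
    """Hypothetical risk index: fraction of unhealthy units labeled healthy
    (a dangerous miss for a power transformer)."""
    unhealthy = [(t, p) for t, p in zip(y_true, y_pred) if t != healthy]
    if not unhealthy:
        return 0.0
    return sum(p == healthy for _, p in unhealthy) / len(unhealthy)

def cost_index(y_true, y_pred, healthy=0):
    """Hypothetical unnecessary-cost index: fraction of healthy units
    flagged for maintenance they do not need."""
    healthy_units = [(t, p) for t, p in zip(y_true, y_pred) if t == healthy]
    if not healthy_units:
        return 0.0
    return sum(p != healthy for _, p in healthy_units) / len(healthy_units)
```

Classifiers whose individual risk and cost indices are lowest would be the ones admitted to the vote.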
Machine Learning
A. Ahmadi; R. Mahboobi Esfanjani
Abstract
Background and Objectives: A predefined structure is usually employed for deep neural networks, which results in over- or underfitting, heavy processing load, and storage overhead. Training combined with pruning can decrease redundancy in deep neural networks; however, it may lead to a decrease in accuracy.
Methods: In this note, we provide a novel approach for structure optimization of deep neural networks based on competition among connections merged with brain-inspired synaptic pruning. In the proposed scheme, the efficiency of each network connection is continuously assessed using the global gradient magnitude criterion, which assigns positive scores to strong, effective connections and negative scores to weak ones. A connection with a weak score is not removed immediately; instead, it is eliminated only when its net score reaches a predetermined threshold. Moreover, the pruning rate is obtained separately for each layer of the network.
Results: Applying the suggested algorithm to a neural network model of a distillation column in a noisy environment demonstrates its effectiveness and applicability.
Conclusion: The proposed method, inspired by connection competition and synaptic pruning in the human brain, enhances learning speed, preserves accuracy, and reduces costs due to its smaller network size. It also handles noisy data more efficiently by continuously assessing network connections.
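The score-then-threshold pruning idea can be sketched as follows. The ±1 scoring rule and the layer-median split are illustrative assumptions; the abstract names the global gradient magnitude criterion but does not give the exact scoring formula:

```python
def score_step(scores, grad_mags):
    """One assessment step for one layer: connections whose gradient magnitude
    is at or above the layer median score +1 (strong), the rest score -1 (weak)."""
    median = sorted(grad_mags)[len(grad_mags) // 2]
    return [s + (1 if g >= median else -1) for s, g in zip(scores, grad_mags)]

def prune_mask(scores, threshold=-3):
    """Keep a connection until its accumulated score reaches the threshold;
    a temporarily weak connection is not removed after a single bad step."""
    return [s > threshold for s in scores]

# three connections assessed over three steps with stable gradient magnitudes
scores = [0, 0, 0]
for _ in range(3):
    scores = score_step(scores, [0.9, 0.1, 0.5])
mask = prune_mask(scores)   # only the persistently weak middle connection is dropped
```

Because each layer computes its own median, the effective pruning rate differs per layer, mirroring the abstract's per-layer pruning rate.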
Machine Learning
S. Khonsha; M. A. Sarram; R. Sheikhpour
Abstract
Background and Objectives: Stock recommender systems (SRSs) based on deep reinforcement learning (DRL) have garnered significant attention within the financial research community. A robust DRL agent aims to consistently allocate some amount of cash to a combination of high-risk and low-risk stocks, with the ultimate objective of maximizing returns while balancing risk. However, existing DRL-based SRSs focus on one or, at most, two sequential trading agents that operate within the same or a shared environment, and they often make mistakes in volatile or variable market conditions. In this paper, a robust Concurrent Multiagent Deep Reinforcement Learning-based Stock Recommender System (CMSRS) is proposed.
Methods: The proposed system introduces a multi-layered architecture that includes feature extraction at the data layer to construct multiple trading environments, so that differently fed DRL agents can robustly recommend assets to the trading layer. The proposed CMSRS uses a variety of data sources, including Google stock trends, fundamental data, and technical indicators along with historical price data, to select and recommend suitable stocks for multiple agents to buy or sell concurrently. To optimize hyperparameters during the validation phase, we employ the Sharpe ratio as a risk-adjusted return measure. Additionally, we address liquidity requirements by defining a precise reward function that dynamically manages cash reserves and penalizes the model for failing to maintain a cash reserve.
Results: The empirical results on real U.S. stock market data show the superiority of our CMSRS, especially in volatile markets and on out-of-sample data.
Conclusion: The proposed CMSRS demonstrates significant advancements in stock recommendation by effectively leveraging multiple trading agents and diverse data sources. The empirical results underscore its robustness and superior performance, particularly in volatile market conditions. This multi-layered approach not only optimizes returns but also efficiently manages risks and liquidity, offering a compelling solution for dynamic and uncertain financial environments. Future work could further refine the model's adaptability to other market conditions and explore its applicability across different asset classes.
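The Sharpe-ratio selection criterion and the cash-reserve penalty can be sketched as follows. The minimum cash fraction and penalty size are illustrative assumptions, since the abstract does not specify the reward function's exact form:

```python
import math

def sharpe_ratio(returns, risk_free=0.0):
    """Risk-adjusted return: mean excess return divided by return volatility.
    Used here as the validation-phase criterion for picking hyperparameters."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return (mean - risk_free) / math.sqrt(var) if var > 0 else 0.0

def reward(step_return, cash, portfolio_value, min_cash_frac=0.1, penalty=0.05):
    """Step reward: the portfolio return, minus a penalty whenever the cash
    reserve drops below the required fraction of portfolio value."""
    if cash < min_cash_frac * portfolio_value:
        return step_return - penalty
    return step_return
```

During validation, the hyperparameter configuration with the highest Sharpe ratio over held-out returns would be retained.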
Machine Learning
A. Mohamadi; M. Habibi; F. Parandin
Abstract
Background and Objectives: Metastatic castration-sensitive prostate cancer (mCSPC) represents a critical juncture in the management of prostate cancer, where accurate prediction of the onset of castration resistance is paramount for guiding treatment decisions.
Methods: In this study, we underscore the power and efficiency of auto-ML models, specifically the Random Forest Classifier, for their low-code, user-friendly nature, making them a practical choice for the complex task of developing a predictive model for the occurrence of castration resistance events (CREs). Utilizing a comprehensive dataset from MSK (Clin Cancer Res 2020), comprising clinical, genetic, and molecular features, we conducted a comprehensive analysis to discern patterns and correlations indicative of castration resistance. A random forest classifier was employed to harness the dataset's intrinsic interactions and construct a robust predictive model.
Results: We evaluated over 18 algorithms to find the best model; the developed model demonstrated an accuracy of 75% in predicting castration resistance events. Furthermore, the analysis highlights the importance of specific features such as 'Fraction Genome Altered' and the role of prostate-specific antigen (PSA) in castration resistance prediction.
Conclusion: Corroborating these findings, recent studies emphasize the correlation between a high 'Fraction Genome Altered' and resistance, as well as the predictive power of elevated PSA levels in castration resistance. This highlights the power of machine learning in improving the outcome predictions vital for prostate cancer treatment. This study deepens our insights into metastatic castration-sensitive prostate cancer and provides a practical tool for clinicians to shape treatment strategies and potentially enhance patient outcomes.
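A minimal sketch of the random-forest setup, assuming scikit-learn is available. The synthetic data and the decision rule linking 'Fraction Genome Altered' and PSA to resistance below are purely illustrative stand-ins, not the MSK cohort or the paper's actual model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
fga = rng.uniform(0.0, 1.0, n)        # stand-in for 'Fraction Genome Altered'
psa = rng.lognormal(1.0, 0.5, n)      # stand-in for PSA level
# hypothetical label: resistance when both genome alteration and PSA are high
y = ((fga > 0.5) & (psa > 2.5)).astype(int)
X = np.column_stack([fga, psa])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
importances = clf.feature_importances_  # relative importance of fga vs. psa
```

The `feature_importances_` attribute is what surfaces feature rankings such as the abstract's 'Fraction Genome Altered' and PSA findings.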
Machine Learning
M. Moosakhani; A. Jahangard-Rafsanjani; S. Zarifzadeh
Abstract
Background and Objectives: Investment has become a paramount concern for many individuals, particularly investors, in today's financial landscape. Cryptocurrencies, encompassing various types, hold a unique position among investors, with Bitcoin being the most prominent; Bitcoin also serves as the foundation for some other cryptocurrencies. Given the critical nature of investment decisions, diverse methods have been employed, ranging from traditional statistical approaches to machine learning and deep learning techniques. However, among these methods, the Generative Adversarial Network (GAN) model has not been applied to the cryptocurrency market. This article explores the applicability of the GAN model for predicting short-term Bitcoin prices.
Methods: In this article, we employ the GAN model to predict short-term Bitcoin prices. Data for this study were collected from a diverse set of sources, including technical data, fundamental data, and technical indicators, as well as additional data such as the number of tweets and Google Trends. We evaluate the model's accuracy using the RMSE, MAE, and MAPE metrics.
Results: The results obtained from the experiments indicate that the GAN model can be effectively utilized in the cryptocurrency market for short-term price prediction.
Conclusion: The results of this study suggest that the GAN model shows promise in predicting short-term prices in the cryptocurrency market, affirming its potential utility within this domain. These insights can provide investors and analysts with enhanced knowledge for making more informed investment decisions, while also paving the way for comparative analyses against alternative models operating in this dynamic field.
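The three evaluation metrics named above can be written out as follows; the abstract gives no formulas, so these are the conventional definitions:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large price errors quadratically."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error: average absolute price error, in price units."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error: scale-free, expressed in percent,
    which makes it convenient for a volatile series like Bitcoin prices."""
    return 100.0 * sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true)
```

For example, predictions of 110 and 190 against true prices of 100 and 200 give an RMSE and MAE of 10 but a MAPE of 7.5%, since the same absolute error weighs less at the higher price.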
Machine Learning
E. Shamsinejad; T. Banirostam; M. M. Pedram; A. M. Rahmani
Abstract
Background and Objectives: Nowadays, with the rapid growth of social networks, extracting valuable information from their voluminous sources, while protecting privacy and preventing the disclosure of unique data, is among the most challenging problems. In this paper, a model for maintaining privacy in big data is presented.
Methods: The proposed model is implemented with the Spark in-memory tool for big data in four steps. The first step is to load the raw data from HDFS into RDDs. The second step is to determine m clusters and cluster heads. The third step is to place the produced tuples into separate RDDs in parallel. The fourth step is to release the anonymized clusters. The suggested model is based on the K-means clustering algorithm and is located in the Spark framework; it also uses the capabilities of the RDD and MLlib components. Determining the optimized cluster heads for each tuple's content, considering the data type, and using the formula of the suggested solution lead to the release of data in the optimized cluster with the lowest rate of data loss and identity disclosure.
Results: Using the Spark framework factors and the optimized clusters in the K-means algorithm, the implementation time of the algorithm over different megabyte-sized data intervals depends on the multiple expiration times and the purposeful elimination of clusters, with data loss rates based on two-level clustering. According to the simulation results, as the volume of data increases, the rate of data loss decreases compared to the FADS and FAST clustering algorithms, owing to the increased number of records in the proposed model. With the formula presented in the proposed model, the number of multiply selected attributes is reduced. According to the presented results and 2-anonymity, the cost factor reaches its lowest value of 0.20 at k=9.
Conclusion: The proposed model provides the right balance between high-speed process execution, minimal data loss, and minimal data disclosure. It also presents a parallel algorithm that increases the efficiency of anonymizing data streams while simultaneously decreasing the information loss rate.
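The release step can be sketched as follows. This is a simplified, Spark-free illustration of the k-anonymity idea (clusters smaller than k are suppressed, and quasi-identifiers in released clusters are generalized to the cluster centroid); it is not the paper's exact algorithm:

```python
def k_anonymous_release(clusters, k):
    """Release only clusters holding at least k records, replacing each record's
    quasi-identifiers with the cluster centroid so no record is unique.
    Returns the released (generalized) records and the count of suppressed ones."""
    released, suppressed = [], 0
    for records in clusters:
        if len(records) < k:
            suppressed += len(records)   # too small to guarantee k-anonymity
            continue
        centroid = [sum(col) / len(records) for col in zip(*records)]
        released.extend([centroid] * len(records))
    return released, suppressed
```

In the paper's setting, each of these clusters would live in its own RDD so the release step can run in parallel across partitions.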
Machine Learning
H. Nunoo-Mensah; S. Wewoliamo Kuseh; J. Yankey; F. A. Acheampong
Abstract
Background and Objectives: To a large extent, low maize production can be attributed to diseases and pests. Accurate, fast, and early detection of maize plant disease is critical for efficient maize production. Early detection of a disease enables growers, breeders, and researchers to apply the appropriate control measures to mitigate the disease's effects. Unfortunately, the lack of expertise in this area and the cost involved often result in incorrect diagnoses of maize plant diseases, which can cause significant economic loss. Over the years, many techniques have been developed for the detection of plant diseases. In recent years, computer-aided methods, especially machine learning (ML) techniques combined with crop images (image-based phenotyping), have become dominant for plant disease detection. Among machine learning approaches, deep learning (DL) techniques have demonstrated human-like accuracy on complex cognitive tasks. This paper presents a comprehensive review of state-of-the-art DL techniques used for detecting diseases in maize leaves.
Methods: To achieve the aims of this paper, we divided the methodology into two main sections: article selection and detailed review of the selected articles. An algorithm was used to select the state-of-the-art DL techniques for maize disease detection spanning 2016 to 2021. Each selected article is then reviewed in detail, taking into consideration the DL technique, the dataset used, and the strengths and limitations of each technique.
Results: DL techniques have demonstrated high accuracy in maize disease detection. It was revealed that transfer learning reduces training time and improves model accuracy. Models trained with images taken in a controlled environment (single leaves) perform poorly when deployed in the field, where there are several leaves. Two-stage object detection models show superior performance when deployed in the field.
Conclusion: From the results, the lack of experts to annotate data accurately, model architecture, hyperparameter tuning, and training resources are some of the challenges facing maize leaf disease detection. DL techniques based on two-stage object detection algorithms are best suited for images with several plant leaves and complex backgrounds.
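The transfer-learning finding can be illustrated with a minimal, framework-free sketch: keep a pretrained feature extractor frozen and train only a small classification head. The random "backbone" and synthetic data below are stand-ins, not a real pretrained network or leaf images:

```python
import numpy as np

rng = np.random.default_rng(1)
W_backbone = rng.normal(size=(64, 16))   # stands in for frozen pretrained weights

def features(x):
    """Frozen feature extractor: its weights are never updated during training."""
    return np.tanh(x @ W_backbone)

# synthetic stand-ins for flattened leaf images, two disease classes
X = rng.normal(size=(200, 64))
F = features(X)
y = (F[:, 0] > 0).astype(float)          # illustrative, feature-separable labels

# transfer learning: gradient descent on the small head (logistic regression) only
w, b = np.zeros(16), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))
    g = p - y
    w -= 0.5 * F.T @ g / len(y)
    b -= 0.5 * g.mean()
accuracy = float((((F @ w + b) > 0) == (y > 0.5)).mean())
```

Only 17 parameters are trained here instead of the full 1,041, which is the mechanism behind the reduced training time the reviewed studies report.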