Original Research Paper
Artificial Intelligence
M. Soluki; Z. Askarinejadamiri; N. Zanjani
Abstract
Background and Objectives: This article explores a method for generating Persian texts using the GPT-2 language model and the Hazm library. Researchers and writers often require tools that can assist them in the writing process and even think on their behalf in various domains. By leveraging the GPT-2 model, it becomes possible to generate acceptable and creative texts, which increases writing speed and efficiency, thus mitigating the high costs associated with article writing.
Methods: In this research, the GPT-2 model is employed to generate and predict Persian texts. The Hazm library is utilized for natural language processing and automated text generation. The results of this study are evaluated using different datasets and output representations, demonstrating that employing the Hazm library with more than 1000 input samples yields superior outcomes compared to other text generation methods.
Results: Through extensive experimentation and analysis, the study demonstrates the effectiveness of this combination in generating coherent and contextually appropriate text in the Persian language. The results highlight the potential of leveraging advanced language models and linguistic processing tools for enhancing natural language generation tasks in Persian. The findings of this research contribute to the growing field of Persian language processing and provide valuable insights for researchers and practitioners working on text generation applications in similar languages.
Conclusion: Overall, this study showcases the promising capabilities of the GPT-2 model and the Hazm library in Persian text generation, underscoring their potential for future advancements in the field. This research serves as a valuable guide and tool for generating Persian texts for research and scientific writing, contributing to cost and time reduction in article writing.
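To make the pipeline concrete, below is a minimal sketch of Persian generation with GPT-2 plus Hazm normalization. The model id, sampling settings, and prompt are illustrative assumptions, not the configuration reported in the paper.

```python
# Sketch: Persian text generation with GPT-2 plus Hazm pre-processing.
from hazm import Normalizer
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "HooshvareLab/gpt2-fa"  # assumed community Persian GPT-2 checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
normalizer = Normalizer()  # Hazm: unify Persian characters and spacing

def generate_persian(prompt: str, max_new_tokens: int = 60) -> str:
    clean = normalizer.normalize(prompt)           # Hazm normalization step
    inputs = tokenizer(clean, return_tensors="pt")
    output = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True, top_p=0.95, temperature=0.8,  # assumed sampling settings
        pad_token_id=tokenizer.eos_token_id,
    )
    return tokenizer.decode(output[0], skip_special_tokens=True)

print(generate_persian("پردازش زبان طبیعی"))
```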
Original Research Paper
Social Networks
M. Sabzekar; S. Baradaran Nejad; M. Khazaeipoor
Abstract
Background and Objectives: Nowadays, social networks are recognized as significant sources of information exchange. Consequently, many organizations have chosen social networks as essential tools for marketing and brand management. Communities are essential structures that can enhance the performance of social networks by grouping nodes and analyzing the information derived from them. This subject becomes more important with the increase in information volume and the complexity of relationships in networks. The goal of community identification is to find subgraphs that are densely connected internally but loosely connected externally.
Methods: While community detection has mostly been studied in static networks in the past, this paper focuses on dynamic networks and the influence of central nodes in forming communities. In the proposed algorithm, the network is captured through multiple snapshots. In the initial snapshot, the influence of each node is calculated. Then, by selecting the k nodes with the highest influence, network communities are formed, and every other node joins the community with which it shares the most edges. In the second step, after receiving the next snapshot, the communities are updated. The k most influential nodes are selected again, and a new community is created for each of them if needed. If a previous community center is not among the newly selected k nodes, its community is dissolved, and its nodes are reassigned to other communities.
Results: Based on the results obtained, the proposed algorithm achieves better results than the compared algorithms in most cases, especially in terms of modularity. This success can be attributed to the utilization of influential nodes in community formation.
Conclusion: Drawing from the outcomes attained, the suggested algorithm effectively outperforms the contrasted algorithms in the majority of instances, particularly with respect to modularity. This accomplishment can be ascribed to the incorporation of influential nodes during community formation.
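A compact sketch of the first snapshot step described above, using networkx. The paper's influence measure is not specified in the abstract, so degree centrality is assumed as a stand-in; nodes are labeled in decreasing influence order so earlier assignments inform later ones.

```python
# Sketch: one snapshot of influence-based community formation.
import networkx as nx

def detect_communities(G: nx.Graph, k: int) -> dict:
    """Assign every node to the community of one of the k most influential nodes."""
    influence = nx.degree_centrality(G)  # assumed stand-in influence score
    order = sorted(G.nodes, key=influence.get, reverse=True)
    centers = order[:k]
    label = {c: c for c in centers}      # each center seeds its own community
    for v in order:
        if v in label:
            continue
        # Join the center community sharing the most edges with v.
        best, best_edges = centers[0], -1
        for c in centers:
            shared = sum(1 for u in G.neighbors(v) if label.get(u) == c)
            if shared > best_edges:
                best, best_edges = c, shared
        label[v] = best
    return label

print(detect_communities(nx.karate_club_graph(), k=2))
```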
Original Research Paper
Computational Intelligence
A. Rouhi; E. Pira
Abstract
Background and Objectives: This paper explores the realm of optimization by synergistically integrating two unique metaheuristic algorithms: the Wild Horse Optimizer (WHO) and the Fireworks Algorithm (FWA). WHO, inspired by the behaviors of wild horses, demonstrates proficiency in global exploration, while FWA emulates the dynamic behavior of fireworks, thereby enhancing local exploitation. The goal is to harness the complementary strengths of these algorithms, achieving a harmonious balance between exploration and exploitation to enhance overall optimization performance.
Methods: The study introduces a novel hybrid metaheuristic algorithm, WHOFWA, detailing its design and implementation. Emphasis is placed on the algorithm's ability to balance exploration and exploitation. Extensive experiments, featuring a diverse set of benchmark optimization problems, including general test functions and those from CEC 2005, CEC 2019, and CEC 2022, assess WHOFWA's effectiveness. Comparative analyses involve WHO, FWA, and other metaheuristic algorithms such as the Reptile Search Algorithm (RSA), Prairie Dog Optimization (PDO), Fick's Law Optimization (FLA), and Ladybug Beetle Optimization (LBO).
Results: According to the Friedman and Wilcoxon signed-rank tests, over all selected test functions, WHOFWA outperforms WHO, FWA, RSA, PDO, FLA, and LBO by 42%, 55%, 74%, 71%, 48%, and 52%, respectively. Finally, the results derived from addressing real-world constrained optimization problems using the proposed algorithm demonstrate its superior performance when compared to several well-regarded algorithms documented in the literature.
Conclusion: WHOFWA, the hybrid metaheuristic algorithm uniting WHO and FWA, emerges as a powerful optimization tool. Its ability to balance exploration and exploitation yields superior performance compared to WHO, FWA, and benchmark algorithms. The study underscores WHOFWA's potential in tackling complex optimization problems, making a valuable contribution to the realm of metaheuristic algorithms.
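A minimal sketch of the hybridization idea: WHO-style global moves around the current leader followed by FWA-style sparks near the best solution. The update rules below are simplified assumptions, not the paper's exact WHOFWA operators.

```python
# Sketch: alternating exploration (WHO-like) and exploitation (FWA-like).
import numpy as np

def whofwa(f, dim, n=30, iters=200, lb=-5.0, ub=5.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n, dim))
    fit = np.apply_along_axis(f, 1, X)
    for _ in range(iters):
        best = X[fit.argmin()].copy()
        # WHO-like exploration: random grazing moves around the leader.
        X = best + rng.uniform(-1, 1, X.shape) * (X - best) * rng.random((n, 1))
        X = np.clip(X, lb, ub)
        fit = np.apply_along_axis(f, 1, X)
        # FWA-like exploitation: Gaussian sparks in a small amplitude near best.
        amp = (ub - lb) * 0.05
        sparks = np.clip(best + rng.normal(0, amp, (n, dim)), lb, ub)
        sfit = np.apply_along_axis(f, 1, sparks)
        improve = sfit < fit                  # keep sparks that improve members
        X[improve], fit[improve] = sparks[improve], sfit[improve]
    return X[fit.argmin()], fit.min()

x_best, f_best = whofwa(lambda x: float(np.sum(x**2)), dim=10)
print(f_best)
```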
Original Research Paper
Electronics
A. Ebadiyan; A. Shokri; M. Amirmazlaghani; N. Darestani Farahani
Abstract
Background and Objectives: Semiconductor junction-based radioisotope detectors are commonly used in radioisotope batteries due to their small size and excellent performance. This study aims to design a betavoltaic battery based on a metal-porous semiconductor Schottky structure, comprising an N-type zinc oxide (ZnO) semiconductor and platinum (Pt) metal.
Methods: We utilized the TCAD-SILVACO 3D simulator to simulate the device, and a C-Interpreter code was applied to simulate the beta particle source, which was an electron beam with an average energy equivalent to that of 63Ni beta particles. The short-circuit current, open-circuit voltage, fill factor (FF), and efficiency of the designed structure were calculated through simulation. Additionally, we discussed the theoretical justification based on the energy band structure.
Results: The energy conversion efficiency of the proposed structure was calculated to be 11.37% when bulk ZnO was utilized in the Schottky junction. However, by creating pores and increasing the effective junction area, a conversion efficiency of 35.5% was achieved. The proposed structure exhibited a short-circuit current, open-circuit voltage, and fill factor (FF) of 37.5 nA, 1.237 V, and 76.5%, respectively.
Conclusion: This study explored a betavoltaic device with a porous structure based on a Schottky junction between Pt and a ZnO semiconductor. The creation of pores increased the contact surface area and effectively trapped beta beams, resulting in improved performance metrics such as efficiency, short-circuit current, and open-circuit voltage.
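As a quick back-of-envelope check of the reported figures, the maximum electrical output power follows from P_max = FF · I_sc · V_oc. The conversion efficiency additionally requires the beta-source input power, which the abstract does not quote.

```python
# Arithmetic check of the reported betavoltaic output figures.
I_sc = 37.5e-9   # short-circuit current, A
V_oc = 1.237     # open-circuit voltage, V
FF   = 0.765     # fill factor

P_max = FF * I_sc * V_oc   # maximum electrical output power, ~35.5 nW
# Efficiency would be eta = P_max / P_in, where P_in is the beta-source
# input power (not quoted in the abstract).
print(f"P_max = {P_max * 1e9:.1f} nW")
```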
Original Research Paper
Power Electronics
P. Hamedani; M. Changizian
Abstract
Background and Objectives: Model predictive control (MPC) is a practical and attractive control methodology for power electronic converters and electrical motor drives. MPC has a simple structure and enables the simultaneous consideration of different objectives and constraints. However, when applying MPC to multilevel inverters (MLIs), especially at higher voltage levels, the number of switching states increases dramatically. This issue becomes more severe when MLIs are used to supply electrical motor drives.
Methods: This paper proposes three different MPC strategies that reduce the number of iterations and the computational burden in a 3-phase 4-level flying capacitor inverter (FCI): traditional MPC with a reduced number of switching states, split MPC, and hybrid MPC-PWM control.
Results: In all methods, the capacitor voltages of the FCI are balanced under different operating conditions. The number of iterations is reduced from 512 in traditional MPC to as few as 192 in the split MPC. Moreover, the split MPC strategy eliminates the use and tuning of weighting factors for capacitor voltage balancing. However, compared with the other methods, the hybrid MPC-PWM control achieves a much shorter voltage balancing time, more accurate phase-current reference tracking, a shorter transient time, and higher efficiency. In addition, the capacitor voltage ripple is negligible in the hybrid MPC-PWM control method.
Conclusion: Simulation results demonstrate the effectiveness of the suggested hybrid MPC-PWM methodology. The results show that the hybrid MPC-PWM control offers excellent dynamic characteristics and succeeds in maintaining voltage balance under different operating conditions.
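For orientation, a generic finite-control-set MPC step over the 8³ = 512 three-phase switching combinations is sketched below; the plant prediction and cost function are placeholders, not the paper's FCI equations. The split variant restricts this enumeration per phase, which is what shrinks the search space.

```python
# Sketch: brute-force finite-control-set MPC step for a 3-phase converter.
import itertools
import numpy as np

def fcs_mpc_step(x, x_ref, predict, states_per_phase=8):
    """Enumerate all 8^3 = 512 switching combinations, pick the cheapest."""
    best_u, best_cost = None, np.inf
    for u in itertools.product(range(states_per_phase), repeat=3):
        x_next = predict(x, u)                 # one-step-ahead plant prediction
        cost = np.sum((x_next - x_ref) ** 2)   # e.g., current-tracking error
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

# Toy usage with a placeholder linear "plant"; a real FCI model would also
# predict flying-capacitor voltages and penalize their imbalance.
u = fcs_mpc_step(np.zeros(3), np.ones(3), lambda x, u: x + 0.1 * np.array(u))
print(u)
```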
Original Research Paper
Machine Learning
A. Mohamadi; M. Habibi; F. Parandin
Abstract
Background and Objectives: Metastatic castration-sensitive prostate cancer (mCSPC) represents a critical juncture in the management of prostate cancer, where the accurate prediction of the onset of castration resistance is paramount for guiding treatment decisions.
Methods: In this study, we underscore the power and efficiency of AutoML models, specifically the Random Forest classifier, whose low-code, user-friendly nature makes them a practical choice for complex tasks, to develop a predictive model for the occurrence of castration resistance events (CRE). Utilizing a comprehensive dataset from MSK (Clin Cancer Res 2020), comprising clinical, genetic, and molecular features, we conducted a comprehensive analysis to discern patterns and correlations indicative of castration resistance. A random forest classifier was employed to harness the dataset's intrinsic interactions and construct a robust predictive model.
Results: We evaluated over 18 algorithms to find the best model, and the developed model demonstrated an impressive accuracy of 75% in predicting castration resistance events. Furthermore, the analysis highlights the importance of specific features such as 'Fraction Genome Altered' and the role of prostate-specific antigen (PSA) in castration resistance prediction.
Conclusion: Corroborating these findings, recent studies emphasize the correlation between a high 'Fraction Genome Altered' and resistance, and the predictive power of elevated PSA levels in castration resistance. This highlights the power of machine learning in improving the outcome predictions vital for prostate cancer treatment. This study deepens our insights into metastatic castration-sensitive prostate cancer and provides a practical tool for clinicians to shape treatment strategies and potentially enhance patient outcomes.
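A minimal sketch of the modeling step with scikit-learn's RandomForestClassifier. The file and column names are hypothetical; the study's full MSK feature set and its AutoML search over 18 algorithms are not reproduced here.

```python
# Sketch: Random Forest prediction of castration-resistance events (CRE).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("msk_2020_clinical.csv")       # hypothetical dataset export
X = df[["fraction_genome_altered", "psa"]]      # assumed column names
y = df["cre"]                                   # CRE label (0/1)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
print(dict(zip(X.columns, clf.feature_importances_)))  # feature importances
```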
Original Research Paper
Bioinformatics
M. Akhavan-Safar; B. Teimourpour; M. Ayyoubi
Abstract
Background and Objectives: One of the important topics in oncology treatment and prevention is the identification of the genes that initiate cancer in cells, known as cancer driver genes (CDGs). Identifying CDGs is important both for a basic understanding of cancer and to help find new therapeutic targets or biomarkers. Several computational methods for finding the genes responsible for cancer have been developed based on genome data. However, many of these methods look for key mutations in genomic data to predict which genes are responsible for cancer. These methods depend on mutation and genome data and often show a high false-positive rate in their results. In this study, we propose an influence maximization-based approach, CinfuMax, which can detect the genes responsible for cancer without needing mutation information.
Methods: In this method, the concept of influence maximization and the independent cascade model are employed. First, gene regulatory networks for breast, lung, and colon cancers were built using regulatory interactions and gene expression data. Next, we ran an independent cascade diffusion algorithm on the networks to compute each gene's coverage. Finally, the genes with the highest coverage were classified as drivers.
Results: The results of the proposed method were compared with 19 other computational and network-based methods in terms of the F-measure and the number of detected driver genes. The results demonstrated that the proposed method outperforms the other methods. CinfuMax is also able to detect 18, 19, and 22 individual driver genes in breast, lung, and colon cancers, respectively, that had not been identified by any of the previous methods.
Conclusion: The results show that independent cascade methods identify driver genes better than linear threshold methods. Driver genes were also classified in terms of influence speed, identifying the genes with the highest diffusion rate in each type of cancer. Identifying these genes can be useful for molecular therapies and drug development.
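A sketch of the coverage computation under the independent cascade model on a directed regulatory network; the activation probability and the number of Monte Carlo runs are assumptions, not the paper's settings.

```python
# Sketch: independent cascade (IC) coverage of a seed gene.
import random
import networkx as nx

def ic_spread(G: nx.DiGraph, seed, p=0.1, runs=100, rng=random.Random(0)) -> float:
    """Average number of genes activated when diffusion starts from `seed`."""
    total = 0
    for _ in range(runs):
        active, frontier = {seed}, [seed]
        while frontier:
            nxt = []
            for u in frontier:
                for v in G.successors(u):
                    # Each regulatory edge fires once with probability p.
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / runs

# coverage = {g: ic_spread(G, g) for g in G}
# Genes with the highest coverage are the candidate drivers.
```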
Original Research Paper
Artificial Intelligence
M. Amoozegar; S. Golestani
Abstract
Background and Objectives: In recent years, various metaheuristic algorithms have become increasingly popular due to their effectiveness in solving complex optimization problems across diverse domains, and they are now utilized in an ever-expanding number of real-world applications. However, two critical factors can significantly impact the performance and optimization capability of metaheuristic algorithms. First, comprehensively understanding the intrinsic behavior of an algorithm can provide key insights for improving its efficiency. Second, proper calibration and tuning of an algorithm's parameters can dramatically enhance its optimization effectiveness.
Methods: In this study, we propose a novel response surface methodology-based approach to thoroughly analyze and elucidate the behavioral dynamics of optimization algorithms. This technique constructs an informative empirical model to determine the relative importance and interaction effects of an algorithm's parameters. Although applied here to the Gravitational Search Algorithm, this systematic methodology can serve as a generally applicable strategy for gaining quantitative and visual insights into the functionality of any metaheuristic algorithm.
Results: Extensive evaluation on 23 complex benchmark test functions showed that the proposed technique can successfully identify ideal parameter values and their comparative significance and interdependencies, enabling a superior comprehension of an algorithm's mechanics.
Conclusion: The presented modeling and analysis framework leverages multifaceted statistical and visualization tools to uncover the inner workings of algorithm behavior for more targeted calibration, thereby enhancing optimization performance. It provides an impactful approach to elucidate how parameter settings shape an algorithm's search so that it can be calibrated for optimal efficiency.
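A small sketch of the response-surface idea: fit a quadratic model of measured performance over two parameters to expose main and interaction effects. The GSA factors (G0, alpha) and the response values below are illustrative assumptions.

```python
# Sketch: quadratic response surface over two algorithm parameters.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

# Design points (G0, alpha) and the measured best fitness at each setting.
params = np.array([[50, 10], [50, 20], [100, 10], [100, 20], [75, 15]])
response = np.array([1.8, 1.2, 1.5, 0.9, 1.0])   # hypothetical measurements

quad = PolynomialFeatures(degree=2, include_bias=False)
Z = quad.fit_transform(params)          # columns: G0, alpha, G0^2, G0*alpha, alpha^2
model = LinearRegression().fit(Z, response)

# Coefficient magnitudes indicate main effects; the G0*alpha term captures
# the parameter interaction.
print(dict(zip(quad.get_feature_names_out(["G0", "alpha"]), model.coef_)))
```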
Original Research Paper
Computer Vision
R. Iranpoor; S. H. Zahiri
Abstract
Background and Objectives: Person re-identification, the task of matching a person across non-overlapping cameras, is a significant application in computer vision. However, it is challenging because of the large number of pedestrians with various poses and appearances appearing at different camera viewpoints. Consequently, various learning approaches have been employed to overcome these challenges. Using methods that strike an appropriate balance between speed and accuracy is also a key consideration in this research.
Methods: Since one of the key challenges is reducing computational costs, the initial focus is on evaluating various methods. Subsequently, these methods are improved by adding low-cost components to the networks. The most significant of these modifications is the addition of an Image Re-Retrieval Layer (IRL) to the backbone network to investigate changes in accuracy.
Results: Given that increasing computational speed is a fundamental goal of this work, the MobileNetV2 architecture is used as the backbone network. The IRL block is designed to have minimal impact on computational speed. With this component, on the CUHK03 dataset there was a 5% increase in mAP and a 3% increase in Rank-1 accuracy; on the Market-1501 dataset, the improvement is partially evident. Comparisons with more complex architectures show a significant increase in computational speed for these methods.
Conclusion: Reducing computational costs and increasing recognition accuracy are interdependent objectives. Depending on the specific context and priorities, one might emphasize one over the other when selecting an appropriate method. The changes applied in this research can lead to more optimal method selection, striking a balance between computational efficiency and recognition accuracy.
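A sketch of the described architecture in PyTorch: a MobileNetV2 backbone plus a lightweight block standing in for the IRL, whose internal design the abstract does not specify; a 1×1 convolution is assumed here to keep the added cost low.

```python
# Sketch: MobileNetV2 re-ID backbone with an assumed low-cost IRL-style block.
import torch
import torch.nn as nn
from torchvision.models import mobilenet_v2

class ReIDNet(nn.Module):
    def __init__(self, num_ids: int, feat_dim: int = 512):
        super().__init__()
        self.backbone = mobilenet_v2(weights="IMAGENET1K_V1").features
        self.irl = nn.Sequential(                  # assumed stand-in for the IRL
            nn.Conv2d(1280, feat_dim, kernel_size=1),
            nn.BatchNorm2d(feat_dim),
            nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(feat_dim, num_ids)

    def forward(self, x):
        f = self.pool(self.irl(self.backbone(x))).flatten(1)
        return f, self.classifier(f)  # embedding for retrieval, logits for training

emb, logits = ReIDNet(num_ids=751)(torch.randn(2, 3, 256, 128))
print(emb.shape, logits.shape)
```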
Original Research Paper
Artificial Intelligence
S. Nemati
Abstract
Background and Objectives: Community question-answering (CQA) websites have become increasingly popular as platforms for individuals to seek and share knowledge. Identifying users with a particular shape of expertise on CQA websites is a beneficial task for both companies and individuals. Specifically, finding those who have a general understanding of certain areas but lack expertise in other fields is crucial for companies planning internship programs. These users, called dash-shaped users, are willing to work for low wages and have the potential to quickly develop into skilled professionals, thus minimizing the risk of unsuccessful recruitment. Because of their vast number of users, CQA websites provide valuable resources for finding individuals with various levels of expertise. This study is the first of its kind to directly classify CQA users based solely on the textual content of their posts.
Methods: To achieve this objective, we propose an ensemble of advanced deep learning algorithms and traditional machine learning methods for the binary classification of CQA users into two categories: those with dash-shaped expertise and those without. In the proposed method, we use stacked generalization to fuse the results of the deep and machine learning methods. To evaluate the effectiveness of our approach, we conducted an extensive experiment on three large datasets focused on Android, C#, and Java topics extracted from the Stack Overflow website.
Results: The results on the Stack Overflow datasets demonstrate that our ensemble method not only outperforms baseline methods, including seven traditional machine learning and six deep models, but also achieves higher performance than state-of-the-art deep models by an average of 10% in accuracy and F1-measure.
Conclusion: The proposed model showed promising results, confirming that users of CQA websites can be classified using only the textual content of their questions. Specifically, the results showed that, using the contextual content of the questions, the proposed model can detect dash-shaped users precisely. Moreover, the proposed model is not limited to detecting dash-shaped users; it can also classify other shapes of expertise, such as T- and C-shaped users, which are valuable for forming agile software teams. Additionally, our model can be used as a filter for downstream applications, such as intern recommendation.
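A minimal sketch of the stacked-generalization fusion with scikit-learn. The TF-IDF features and these particular base learners are stand-ins for the paper's mix of deep and traditional models; the meta-learner combines their outputs.

```python
# Sketch: stacking ensemble for binary dash-shaped user classification.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

stack = StackingClassifier(
    estimators=[
        ("svm", LinearSVC()),                             # base learner 1
        ("rf", RandomForestClassifier(n_estimators=200)), # base learner 2
    ],
    final_estimator=LogisticRegression(),                 # meta-learner
)
model = make_pipeline(TfidfVectorizer(max_features=20000), stack)
# model.fit(post_texts, labels)   # labels: dash-shaped vs. not
# model.predict(new_post_texts)
```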
Original Research Paper
Object Recognition
E. Ghasemi Bideskan; S.M. Razavi; S. Mohamadzadeh; M. Taghippour
Abstract
Background and Objectives: The recognition of facial expressions using metaheuristic algorithms is a research topic in the field of computer vision. This article presents an approach for identifying facial expressions using a filter optimized by metaheuristic algorithms.
Methods: The entire feature extraction process hinges on a filter optimally configured by metaheuristic algorithms. Essentially, the metaheuristic algorithm is used to determine the optimal weights of the feature extraction filters; once the optimal weights have been determined, the optimal filter sizes are determined as well. As an initial step, the k-nearest neighbor classifier is employed due to its simplicity and high accuracy. Following the initial stage, a final model is presented that integrates results from both the filter bank and a multilayer perceptron neural network.
Results: The instances in the FER2013 database were analyzed using the proposed method. The model achieved a recognition rate of 78%, which is superior to other algorithms and methods while requiring less training time. In addition, the JAFFE (Japanese Female Facial Expression) database was used for validation; on this dataset, the proposed approach achieved a 94.88% accuracy rate, outperforming its competitors.
Conclusion: This article proposes a method for improving facial expression recognition using an optimized filter implemented through a metaheuristic algorithm based on the Kidney Algorithm (KA). In this approach, optimized filters were extracted using the Kidney metaheuristic algorithm together with k-nearest neighbor and multilayer perceptron classifiers. Moreover, the optimal size and number of filters for facial expression recognition were determined so as to achieve the highest accuracy in the extraction process.
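At the heart of such approaches is a fitness function that scores candidate filter weights by the downstream classifier's accuracy; a sketch is given below, with the convolutional feature step simplified. The metaheuristic (the Kidney Algorithm in this paper) would call this fitness while evolving the weights.

```python
# Sketch: fitness of candidate filter weights via k-NN cross-validated accuracy.
import numpy as np
from scipy.signal import convolve2d
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def fitness(filter_weights: np.ndarray, images, labels, size: int = 5) -> float:
    """Score a flat weight vector (size*size) by classification accuracy."""
    kernel = filter_weights.reshape(size, size)
    # Extract features by convolving each face image with the candidate filter.
    feats = np.array([convolve2d(img, kernel, mode="valid").ravel()
                      for img in images])
    knn = KNeighborsClassifier(n_neighbors=3)
    return cross_val_score(knn, feats, labels, cv=3).mean()  # to be maximized
```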
Original Research Paper
Power Electronics
P. Hamedani; S. S. Fazel; M. Shahbazi
Abstract
Background and Objectives: Modeling and simulation of electric railway networks is an important issue due to their nonlinear and time-varying nature. This problem becomes more serious with the enormous growth in public transportation tracks and the number of moving trains. Therefore, the main aim of this paper is to present a simple and applicable simulation method for DC electric railway systems.
Methods: A train movement simulator for a DC electric railway line is developed in Matlab. A case study based on the practical parameters of Isfahan Metro Line 1 is performed. The simulator includes the train's mechanical movement model and the power supply system model. Regenerative braking and driving control modes with coasting control are applied in the simulation.
Results: The simulation results for the power network are presented for a single train traveling in both the up and down directions. The results demonstrate the correctness and simplicity of the suggested method, which facilitates the investigation of DC electric railway networks.
Conclusion: According to the results, the train current is consistent with the electric power demand of the train, whereas the pantograph voltage has an inverse relationship with the power demand. During braking, the excess power of the train is injected into the electrical network, causing overvoltage and undervoltage in the overhead contact line and at the substation busbar. Therefore, at the maximum braking power of the train, the pantograph voltage reaches its maximum. The largest fluctuations occur at the substation closest to the train; as the train moves away from a traction substation, the voltage fluctuations decrease, and vice versa.
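A toy sketch of the mechanical movement model only: Euler integration of train motion under tractive effort and Davis-type running resistance. The coefficients are illustrative assumptions, not Isfahan Metro Line 1 parameters, and the power-network side is omitted.

```python
# Sketch: simple train movement integration (accelerate, then hold speed).
def simulate_run(mass=3.0e5, F_max=3.0e5, v_target=22.0, dt=0.1, t_end=120.0):
    log, v, s, t = [], 0.0, 0.0, 0.0
    while t < t_end:
        # Davis running resistance R = a + b*v + c*v^2 (assumed coefficients).
        resistance = 4000 + 120 * v + 6 * v * v
        F = F_max if v < v_target else resistance  # cruise once at target speed
        a = (F - resistance) / mass
        v, s, t = v + a * dt, s + v * dt, t + dt
        log.append((t, v, s))                      # time, speed, position
    return log

trace = simulate_run()
print(trace[-1])  # final (t, v, s)
```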
Review Paper
Communications Networks
M. Hosseini Shirvani; A. Akbarifar
Abstract
Background and Objectives: Wireless sensor networks (WSNs) are ad-hoc technologies with various applications in different industries, such as healthcare systems, environmental and military surveillance, manufacturing, and the IoT context in general. The expanding scope of sensor network applications has led researchers to develop solutions that provide sustainable communications and networks for distributed environments, as well as ways to secure these methods with limited resources.
Methods: The lack of fixed infrastructure and the vulnerable nature of these networks make it difficult to design security models and algorithms for them. Thus, to run a sensor network in safe mode, any type of attack must be detected before a security breach materializes. Given the importance of the network, the nature of sensor networks, and the critical challenge of energy consumption, defensive measures such as intrusion prevention and intrusion detection systems are selected.
Results: This paper surveys intrusion and anomaly detection systems in WSNs to identify potentials and challenges for further work. Designing an efficient and optimal intrusion detection solution applicable to wireless sensor networks, IoT, and other ad-hoc networks remains a major challenge; this survey helps researchers design or choose the best approach for their future research.
Conclusion: This research also paves the way for interested researchers to identify existing challenges and shortcomings for further work.
Original Research Paper
Electronics
Z. Ahangari
Abstract
Background and Objectives: In this study, a reconfigurable field-effect transistor is developed utilizing a multi-doped source-drain region, enabling operation in both n-mode and p-mode through a simple adjustment of electrode bias. In contrast to traditional reconfigurable transistors that rely on Schottky-barrier source/drain regions with identical Schottky barrier heights, the suggested device uses a straightforward fabrication process involving physically multi-doped source and drain regions. The proposed structure incorporates a bilayer of n+ and p+ in the source and drain regions.
Methods: The device simulator Silvaco (ATLAS) is utilized to conduct the numerical simulations.
Results: The transistor exhibits consistent transfer characteristics in both modes of operation. The influence of key design parameters on device performance is analyzed. A notable aspect of this transistor is the integration of an XNOR logic gate within a single device, rendering it suitable for high-performance computing circuits. The findings indicate that on-state currents of 142 µA/µm and 57.2 µA/µm, along with on/off current ratios of 8.68×10⁷ and 3.5×10⁷, are attained for n-mode and p-mode operation, respectively.
Conclusion: A single-transistor XNOR gate design offers potential advantages for future computing circuits due to its simplicity and reduced component count, which could lead to smaller, more energy-efficient, and potentially faster computing systems. This innovation may pave the way for advancements in low-power and high-density electronic devices.
Original Research Paper
Machine Learning
M. Moosakhani; A. Jahangard-Rafsanjani; S. Zarifzadeh
Abstract
Background and Objectives: Investment has become a paramount concern for various individuals, particularly investors, in today's financial landscape. Cryptocurrencies of various types hold a unique position among investors, with Bitcoin being the most prominent; Bitcoin also serves as the foundation for several other cryptocurrencies. Given the critical nature of investment decisions, diverse methods have been employed, ranging from traditional statistical approaches to machine learning and deep learning techniques. However, among these methods, the Generative Adversarial Network (GAN) model has not been applied to the cryptocurrency market. This article explores the applicability of the GAN model for predicting short-term Bitcoin prices.
Methods: In this article, we employ the GAN model to predict short-term Bitcoin prices. Data for this study were collected from a diverse set of sources, including technical data, fundamental data, and technical indicators, as well as additional data such as the number of tweets and Google Trends. We evaluate the model's accuracy using the RMSE, MAE, and MAPE metrics.
Results: The results obtained from the experiments indicate that the GAN model can be effectively utilized in the cryptocurrency market for short-term price prediction.
Conclusion: The results of this study suggest that the GAN model shows promise in predicting short-term prices in the cryptocurrency market, affirming its potential utility within this domain. These insights can provide investors and analysts with enhanced knowledge for making more informed investment decisions, while also paving the way for comparative analyses against alternative models operating in this dynamic field.
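The GAN architecture itself is not detailed in the abstract; the evaluation step, however, is standard, and a sketch of the three reported metrics is shown below.

```python
# Sketch: RMSE, MAE, and MAPE for predicted vs. actual Bitcoin prices.
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    err = y_pred - y_true
    return {
        "RMSE": float(np.sqrt(np.mean(err ** 2))),
        "MAE": float(np.mean(np.abs(err))),
        "MAPE": float(np.mean(np.abs(err / y_true))) * 100,  # in percent
    }

print(evaluate(np.array([100.0, 102.0, 101.0]), np.array([99.0, 103.0, 100.5])))
```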
Original Research Paper
Image Annotation and Retrieval
A. Gheitasi; H. Farsi; S. Mohamadzadeh
Abstract
Background and Objectives: Freehand sketching is an easy-to-use yet effective instrument for human-computer interaction. Sketches are highly abstract, and a domain gap exists between an intended sketch and the corresponding real image. In addition to appearance information, shape information is believed to be very effective in sketch recognition and retrieval.
Methods: In machine vision, understanding freehand sketches has grown more crucial due to the widespread use of touchscreen devices. The majority of sketch recognition and retrieval methods rely on appearance information alone. This article presents a hybrid network architecture, termed hybrid convolution, comprising two networks: S-Net (Sketch Network) and A-Net (Appearance Network), which describe shape and appearance information, respectively. In addition, a Canonical Correlation Analysis (CCA) module is utilized to align the two domains and enhance sketch retrieval performance by reducing the domain gap. Finally, sketch retrieval using the hybrid Convolutional Neural Network (CNN) and the CCA domain adaptation module is tested on several datasets, including Sketchy, TU-Berlin, and Flickr-15k.
Results: The proposed method has been evaluated in two tasks: image classification and Sketch-Based Image Retrieval (SBIR). The proposed hybrid convolution works better than other baseline networks, achieving classification scores of 84.44% on the TU-Berlin dataset and 82.76% on the Sketchy dataset. Additionally, in SBIR, the proposed method stands out among deep learning-based methods and outperforms non-deep methods by a significant margin.
Conclusion: This research presented the hybrid convolutional framework, a deep learning-based approach for pattern recognition. Compared to the best available methods, hybrid network convolution increases recognition and retrieval accuracy by around 5%. It is an efficient and thorough method that demonstrated valid results in sketch-based image classification and retrieval on the TU-Berlin, Flickr-15k, and Sketchy datasets.
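A sketch of the CCA alignment step using scikit-learn: paired S-Net and A-Net features are projected into a shared correlated space before retrieval. The feature dimensions and random data below are placeholders.

```python
# Sketch: CCA domain alignment between sketch and image features.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
sketch_feats = rng.standard_normal((500, 256))  # S-Net outputs (placeholder)
image_feats = rng.standard_normal((500, 256))   # paired A-Net outputs (placeholder)

cca = CCA(n_components=64)
cca.fit(sketch_feats, image_feats)              # learn the shared subspace
s_proj, i_proj = cca.transform(sketch_feats, image_feats)

# Retrieval: rank gallery images by cosine similarity to the query sketch
# in the shared, correlation-maximized space.
print(s_proj.shape, i_proj.shape)
```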
Original Research Paper
Control of Biological Systems
Z. Ghassemi Zahan; S. Ozgoli; S. Bolouki
Abstract
Background and Objectives: In genetic network control, RC-Centrality is introduced as a new control centrality measure for controlling linear time-invariant networks. The objective of this study is to propose an optimal control centrality metric that quantifies the centrality of individual nodes or groups of nodes within a network. Specifically, RC-Centrality identifies key nodes or node groups that can act as controllers, such as genes regulating the gene expression process. To assess the effectiveness of this method, RC-Centrality is compared with standard centralities in a real genetic network. Additionally, the research examines the role of the uncertainty structure in altering the priority order of RC-Centrality.
Methods: The RC-Centrality measure is formulated as an optimal control problem on weighted, directed, and signed networks. Robust controllers are designed to ensure Lyapunov stability under uncertainty. A cost function is introduced that measures performance as the input energy in the presence of uncertainty.
Results: The study presents RC-Centrality as an effective measure for identifying key nodes in genetic networks suitable for control. In-silico simulations are conducted to evaluate its performance in comparison with standard centralities. The research highlights the impact of the uncertainty structure on the priorities assigned by RC-Centrality.
Conclusion: RC-Centrality offers a promising approach to identifying essential nodes in genetic networks for control purposes. Its performance is demonstrated through simulations, and the study emphasizes the influence of the uncertainty structure on the centrality measure's prioritization. This research has implications for understanding and controlling genetic networks, particularly in the presence of uncertainty.
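For intuition, a standard energy-based control-centrality score is sketched below: for a candidate driver set, the trace of the inverse controllability Gramian serves as a proxy for the input energy needed, so smaller values indicate higher centrality. RC-Centrality additionally accounts for uncertainty and robustness, which this sketch omits.

```python
# Sketch: energy-based control score for a candidate set of driver nodes.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def control_energy_score(A: np.ndarray, drivers) -> float:
    """trace(W^-1) for drivers injecting input; A must be Hurwitz (stable)."""
    n = A.shape[0]
    B = np.zeros((n, len(drivers)))
    B[list(drivers), range(len(drivers))] = 1.0
    # Controllability Gramian W solves A W + W A^T + B B^T = 0.
    W = solve_continuous_lyapunov(A, -B @ B.T)
    return float(np.trace(np.linalg.inv(W)))   # smaller = easier to control

A = np.array([[-1.0, 0.5, 0.0],
              [0.0, -2.0, 0.3],
              [0.2, 0.0, -1.5]])   # toy signed, weighted, directed network
print(control_energy_score(A, drivers=[0]))
```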
Original Research Paper
Computer Vision
N. Rahimpour; A. Azadbakht; M. Tahmasbi; H. Farahani; S.R. Kheradpishe; A. Javaheri
Abstract
Background and Objectives: Cadastral boundary detection deals with locating the boundaries of land ownership and use. Recently, there has been high demand for accelerating and improving the automatic detection of cadastral mapping. As this problem is at its starting point, few studies have used deep learning algorithms.
Methods: In this paper, we develop an algorithm with a Mask R-CNN core followed by geometric post-processing methods that improve the quality of the output. Many studies use classification or semantic segmentation, but our algorithm employs instance segmentation. The algorithm consists of two parts, each comprising a few phases. In the first part, we use Mask R-CNN with a ResNet-50 backbone pre-trained on the ImageNet dataset. In the second part, we apply three geometric post-processing methods to the output of the first part to improve the overall result. Here, we also use computational geometry to introduce a new line-simplification method, which we call the pocket-based simplification algorithm.
Results: We used three Google Maps images with sizes 4963 × 2819, 3999 × 3999, and 5520 × 3776 pixels and divided them into overlapping and non-overlapping 400 × 400 patches for training the algorithm. We then tested it on a Google Maps image of the Famenin region in Iran. To evaluate the performance of our algorithm, we use the popular metrics Recall, Precision, and F-score. The highest Recall is 95%, with a Precision of 72%, resulting in an F-score of 82%.
Conclusion: The idea of using segmentation to derive region boundaries is new. We used Mask R-CNN, a well-suited tool for this kind of segmentation, as the core of our algorithm. Our geometric post-processing improves the F-score by almost 10 percent. The scores for a region in Iran containing many small farms are very good.
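Two pipeline pieces sketched below: tiling a large image into overlapping 400×400 patches for Mask R-CNN, and a boundary-simplification step. Shapely's Douglas-Peucker `simplify` is used as a stand-in for the pocket-based simplification, whose details the abstract does not give.

```python
# Sketch: patch tiling and a stand-in boundary simplification step.
import numpy as np
from shapely.geometry import LineString

def tile(image: np.ndarray, size: int = 400, stride: int = 200):
    """Yield overlapping size x size patches with their top-left offsets."""
    h, w = image.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield (y, x), image[y:y + size, x:x + size]

def simplify_boundary(points, tolerance: float = 2.0):
    """Reduce vertex count of a detected boundary polyline (Douglas-Peucker)."""
    return list(LineString(points).simplify(tolerance).coords)

img = np.zeros((2819, 4963), dtype=np.uint8)       # placeholder raster
patches = list(tile(img))
print(len(patches), simplify_boundary([(0, 0), (1, 0.1), (2, 0), (4, 3)]))
```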
Original Research Paper
Meta-heuristic Algorithms
E. Pira; A. Rouhi
Abstract
Background and Objectives: The development of effective meta-heuristic algorithms is crucial for solving complex optimization problems. This paper introduces the Society Deciling Process (SDP), a novel socio-inspired meta-heuristic algorithm that simulates social categorization into deciles based on metrics such as income, occupation, and education. The objective of this research is to introduce the SDP algorithm and evaluate its performance in terms of convergence speed and hit rate, comparing it with seven well-established meta-heuristic algorithms to highlight its potential in optimization tasks.
Methods: The SDP algorithm's efficacy was evaluated on a comprehensive set of 14 general test functions, including benchmarks from the CEC 2019 and CEC 2022 competitions. The performance of SDP was compared against seven established meta-heuristic algorithms: the Artificial Hummingbird Algorithm (AHA), Dwarf Mongoose Optimization (DMO), Reptile Search Algorithm (RSA), Snake Optimizer (SO), Prairie Dog Optimization (PDO), Fick's Law Optimization (FLA), and Gazelle Optimization Algorithm (GOA). Statistical analysis was conducted using Friedman's rank and Wilcoxon signed-rank tests to assess relative performance in terms of exploration and exploitation capabilities and proximity to the optimum solution.
Results: The results demonstrated that the SDP algorithm outperforms its counterparts in convergence speed and hit rate across the selected test functions. In the statistical tests, SDP showed significantly better exploration and exploitation, leading to closer proximity to the optimum solution than the other algorithms. Furthermore, when applied to five complex engineering design problems, the SDP algorithm exhibited superior performance, outmatching state-of-the-art algorithms in effectiveness and efficiency.
Conclusion: The Society Deciling Process (SDP) algorithm introduces a novel and effective approach to optimization, inspired by the dynamics of societal structure. Its superior convergence speed, exploration and exploitation capabilities, and applicability to complex engineering problems establish SDP as a promising meta-heuristic algorithm. This research demonstrates the potential of socio-inspired algorithms in optimization tasks and opens avenues for further enhancements in meta-heuristic algorithm design.
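A minimal sketch of the deciling idea: rank the population by fitness, split it into ten deciles, and let each decile learn from the one above it. The update rule below is a simplified assumption, not the paper's exact operators.

```python
# Sketch: one decile-based update step of a socio-inspired search.
import numpy as np

def sdp_step(X, fit, rng, lb=-5.0, ub=5.0):
    """X: (n, dim) population with n >= 10; fit: fitness values (minimized)."""
    order = np.argsort(fit)                  # best individuals first
    deciles = np.array_split(order, 10)      # ten social strata
    for d in range(1, 10):                   # each decile learns from the one above
        for i in deciles[d]:
            exemplar = X[rng.choice(deciles[d - 1])]
            X[i] += rng.random() * (exemplar - X[i])   # move toward exemplar
    return np.clip(X, lb, ub)

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, (50, 10))
fit = np.sum(X**2, axis=1)
X = sdp_step(X, fit, rng)
```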
Original Research Paper
Power
S. Abbasi; D. Nazarpour; S. Golshannavaz
Abstract
Background and Objectives: Distributed generation (DG) based on renewable energy, such as PV units, is becoming more prevalent in distribution networks due to its technical and environmental benefits. However, the intermittency and uncertainty of these sources lead to technical and operational challenges. Energy storage deployment, uncertainty analysis, and network reconfiguration are apt remedies for these challenges.
Methods: Energy management of modern, smart, renewable-penetrated distribution networks is tailored here considering the correlations among uncertainties. Network operation costs, including switching operations, the expected energy not served (EENS) index as the reliability objective, and node voltage deviation suppression as the technical objective are mathematically modeled. Multi-objective particle swarm optimization (MOPSO) is used as the optimization engine. A scenario generation method and the Nataf transformation are used in the probabilistic evaluation of the problem. Moreover, the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) is deployed to strike a final balance between the different objectives and yield a unified solution.
Results: To show the effectiveness of the proposed approach, extensive simulations are run on the IEEE 33-node distribution network. Different cases are simulated and examined to assess the performance of the proposed model.
Conclusion: For the different objectives addressing different aspects of the network, remarkable achievements are attained. In brief, the final solution shows a 4.50% decrease in operation cost, a 13.07% improvement in the reliability index, and an 18.85% reduction in voltage deviation compared to the initial conditions.
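A small sketch of the TOPSIS step used to select one compromise solution from the MOPSO Pareto front. All three objectives (cost, EENS, voltage deviation) are treated as costs to be minimized, and the equal weights are an assumption.

```python
# Sketch: TOPSIS selection of a compromise solution from a Pareto front.
import numpy as np

def topsis(F: np.ndarray, weights=None) -> int:
    """F: (n_solutions, n_objectives) cost matrix; returns index of best row."""
    w = np.ones(F.shape[1]) / F.shape[1] if weights is None else np.asarray(weights)
    Z = F / np.linalg.norm(F, axis=0)            # vector normalization per column
    V = Z * w                                    # weighted normalized matrix
    ideal, nadir = V.min(axis=0), V.max(axis=0)  # best/worst for cost criteria
    d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to ideal point
    d_neg = np.linalg.norm(V - nadir, axis=1)    # distance to worst point
    closeness = d_neg / (d_pos + d_neg)
    return int(np.argmax(closeness))

# Toy Pareto front: columns are (operation cost, EENS, voltage deviation).
front = np.array([[100.0, 0.8, 0.05],
                  [ 95.0, 0.9, 0.06],
                  [105.0, 0.7, 0.04]])
print("chosen solution:", topsis(front))
```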