Showing 22 results for Machine Learning
Volume 0, Issue 0 (3-2023)
Abstract
Urban growth boundaries are considered one of the key tools for controlling and managing the physical development of metropolitan areas. Uncontrolled and unplanned expansion in these regions has become a major challenge for urban and regional planners and managers, as this process leads to the destruction of agricultural lands and natural resources. The aim of this research is to simulate and assess future changes in growth boundaries in the Isfahan metropolitan area with the goal of preserving environmental resources and controlling physical expansion. In this regard, by adopting a positivist approach that follows an analytical and measurement-driven process, satellite imagery was utilized to assess changes in the physical expansion of the Isfahan metropolitan area. Artificial neural networks and machine learning algorithms were employed to predict the extent of future physical growth, and the projected growth boundaries were delineated. The research findings indicate that the Isfahan metropolitan area has experienced significant uncontrolled expansion, particularly in terms of physical development, over recent decades, and the reduction of agricultural and natural lands has become one of its major challenges. Based on the conducted simulations, the proposed growth boundaries can serve as an effective tool for managing and planning urban-regional development and preventing further degradation of natural resources and lands.
Sediqeh Soleimanifard, Nafiseh Jahanbakhshian, Somayeh Niknia
Volume 0, Issue 0 (1-2024)
Abstract
The present study investigates the effect of baking temperatures (140, 160, 180, 200, and 220℃) on texture kinetics. It also explores a statistical classification meta-algorithm, called Adaptive Boosting (AdaBoost), to predict texture changes during conventional cake baking. The experimental results indicated that texture properties were significantly affected by baking temperature and time. As time and temperature increased, there was an increase in hardness, cohesiveness, gumminess, and chewiness and a decrease in springiness. However, the impact of time and temperature on resilience was inconsistent, as it was maximum in the last quarter of the process. The predicted results revealed that the AdaBoost algorithm accurately predicted the texture properties with a high coefficient of determination (R2 > 0.989) and minimal root mean square error (RMSE < 0.0019) across all textural properties. Therefore, it can serve as an efficient tool for predicting the texture properties of cakes during baking. Furthermore, the proposed methodology can be extended to predict the texture properties of other baked goods.
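To illustrate the kind of model described above, the following is a minimal, hedged sketch of an AdaBoost regressor trained on baking temperature and time to predict a single texture attribute such as hardness. The data, column layout, and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: AdaBoost regression of one texture attribute from
# baking temperature and time. Data and settings are placeholders.
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
# Stand-in data: columns are baking temperature (deg C) and baking time (min)
X = rng.uniform([140, 5], [220, 40], size=(200, 2))
y = 0.02 * X[:, 0] + 0.05 * X[:, 1] + rng.normal(0, 0.1, 200)  # synthetic hardness values

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = AdaBoostRegressor(n_estimators=200, learning_rate=0.05, random_state=0)
model.fit(X_train, y_train)
pred = model.predict(X_test)

print("R2  :", r2_score(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
```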
Volume 0, Issue 0 (8-2024)
Abstract
Moisture damage in asphalt mixtures poses significant challenges to infrastructure durability, necessitating accurate modeling for effective mitigation strategies due to the complex nature of moisture susceptibility. Current tests, such as those utilizing general indicators like the indirect tensile strength ratio, examine moisture susceptibility in asphalt mixtures. However, these tests incur substantial costs and require considerable time. Therefore, this study aims to develop moisture susceptibility prediction models using Multi-Gene Genetic Programming (MGGP). The research utilized four types of aggregates (two limestone and two granite types) and eight different Performance Grade (PG) bitumen types. The modified Lottman test method (AASHTO T283) was employed for moisture susceptibility assessment, with samples subjected to specific conditioning protocols including vacuum saturation (13-67 kPa absolute pressure), freeze-thaw cycles (-18°C for 16 hours), and hot water conditioning (60°C for 24 hours). Indirect tensile strength tests were conducted under controlled loading conditions (2 Hz frequency, 0.1s loading time, 0.4s rest period) at 25°C. The dataset comprised 34 samples and 11 variables to predict two key indicators: Inflection Stripping Point (ISP) and Stripping Slope (SS). The MGGP model demonstrated remarkable performance in predicting both ISP and SS, achieving R2 values of 0.981 and 0.974 for the test data, respectively. Several crucial parameters were analyzed, including the apparent film thickness (AFT) calculated using aggregate specific surface area, permeability measured through falling head test method (ASTM PS 129-01), and surface free energy components. The surface energy analysis incorporated both cohesive free energy (CFE) and adhesive free energy (AFE), with special attention to the acid-base theory components: Lifshitz-van der Waals (LW), Lewis acid (Γ+), and Lewis base (Γ-) components. For ISP prediction, the MGGP model identified key variables including the ratio of base to acid surface free energy (SFE), asphalt-water adhesion (ΓAsphalt-Water), cohesive free energy (CFE), adhesive free energy (AFE), permeability of asphalt mixture (PAM), asphalt film thickness (AFT), and degree of saturation (DS). The model for SS prediction emphasized the importance of ΓAsphalt-Water, aggregate-water adhesion (ΓAggregate-Water), wettability, specific surface area (SSA), PAM, and DS. The study employed various performance metrics to evaluate the MGGP models. For ISP predictions, the model achieved RMSE, MSE, and MAE values of 5.228, 27.337, and 3.843, respectively. For SS predictions, these values were 0.294, 0.086, and 0.231, respectively, indicating high accuracy and low error rates. These results surpass those of previous studies employing traditional Genetic Programming (GP) methods, highlighting the potential of MGGP as a powerful tool in modeling asphalt moisture susceptibility. The practical implications of this research are significant for improving asphalt mixture durability, reducing maintenance costs, and enhancing road safety. Future research could focus on validating the models across a broader range of asphalt mixtures and environmental conditions.
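Multi-gene GP implementations are typically MATLAB-based, so as a rough, hedged illustration of the general idea (evolving an interpretable formula from mixture variables), a single-tree symbolic regression with the gplearn package could look like the sketch below. The feature names, synthetic data, and settings are placeholders standing in for the study's variables, not its actual pipeline.

```python
# Hedged sketch: symbolic regression as a simplified stand-in for MGGP.
# Feature names mirror variables discussed above; the data are synthetic.
import numpy as np
from gplearn.genetic import SymbolicRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
features = ["SFE_ratio", "CFE", "AFE", "PAM", "AFT", "DS"]  # illustrative subset
X = rng.uniform(0, 1, size=(34, len(features)))
y = 20 * X[:, 0] + 5 * X[:, 3] * X[:, 5] + rng.normal(0, 0.5, 34)  # synthetic ISP-like target

model = SymbolicRegressor(
    population_size=2000,
    generations=30,
    function_set=("add", "sub", "mul", "div"),
    metric="rmse",
    parsimony_coefficient=0.001,
    feature_names=features,
    random_state=0,
)
model.fit(X, y)

print("Evolved expression:", model._program)
print("R2 on training data:", r2_score(y, model.predict(X)))
```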
Volume 0, Issue 0 (12-2024)
Abstract
Aim and Introduction
Achieving sustained and long-term economic growth necessitates the optimal allocation and utilization of resources at the national level. This goal relies heavily on the existence of efficient financial markets, particularly well-functioning and extensive capital markets. Numerous macroeconomic variables can influence the level of risk associated with shareholder rights, corporate cash flows, and adjusted discount rates. Additionally, changes in economic conditions can alter both the quantity and nature of investment opportunities.
However, establishing a fixed and consistent relationship between macroeconomic variables and stock price indices remains challenging. The complex and dynamic nature of financial markets makes it difficult to identify a method that accurately reflects economic conditions and captures the most critical influencing variables. Therefore, this study employs machine learning models to identify the key macroeconomic factors affecting stock price indices.
Methodology
Feature selection is one of the most common and crucial techniques in data preprocessing and serves as an essential component of machine learning. This study employs feature selection models to identify the most relevant predictors of the stock price index. The models utilized include the random forest method and regularized linear regression. To examine the nature of the relationships between variables, the jointness method was applied. Additionally, the mutual information analysis was conducted to assess the influence of key variables over different decades, enabling a deeper understanding of how the impact of macroeconomic factors on stock prices has evolved over time.
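A minimal sketch of this kind of pipeline is shown below, combining random forest importances, a regularized (Lasso) regression, and mutual information scores. The macroeconomic column names and monthly data are placeholders, not the study's dataset.

```python
# Hedged sketch of the feature-selection step: random forest importances,
# regularized (Lasso) regression coefficients, and mutual information.
# Column names and data are illustrative placeholders.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
cols = ["exchange_rate", "inflation", "gdp_growth", "trade_openness",
        "interest_rate", "global_uncertainty", "financial_development"]
df = pd.DataFrame(rng.normal(size=(240, len(cols))), columns=cols)  # monthly stand-in data
y = (0.6 * df["exchange_rate"] + 0.4 * df["inflation"]
     - 0.3 * df["global_uncertainty"] + rng.normal(0, 0.2, len(df)))

X = StandardScaler().fit_transform(df)

rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)
lasso = LassoCV(cv=5, random_state=0).fit(X, y)
mi = mutual_info_regression(X, y, random_state=0)

summary = pd.DataFrame({
    "rf_importance": rf.feature_importances_,
    "lasso_coef": lasso.coef_,
    "mutual_info": mi,
}, index=cols).sort_values("rf_importance", ascending=False)
print(summary)
```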
Findings
The study analyzed the impact of selected macroeconomic variables on stock price indices, focusing on the Tehran Stock Exchange. The findings from the Random Forest (RF) and Regularized Linear Regression (RLR) models indicate that exchange rates, financial development, inflation, economic growth, trade openness, and global uncertainty significantly influence Iran’s stock price index. The results demonstrate that global uncertainty, interest rates, and trade openness exert negative effects on stock prices, whereas the other variables positively influence stock prices.
The jointness method was employed to analyze the relationships between these variables, further confirming their significance. Moreover, the Mutual Information method was used to examine how the influence of these key variables varied across different decades.
Discussion and Conclusion
Among the variables examined, exchange rates, financial development, inflation, economic growth, trade openness, and global uncertainty emerged as the most significant factors influencing Iran’s stock price index. This finding is not surprising, given Iran’s historical experience with significant exchange rate fluctuations and persistent inflationary pressures. Global uncertainty has consistently influenced domestic markets in Iran due to political and economic instability. Previous research has highlighted the complex relationship between exchange rate fluctuations and stock price indices (Ratanapakorn & Sharma, 2007). Scholars have argued that the relationship between stock prices and exchange rates can significantly affect monetary and fiscal policy, as a recessionary stock market can reduce overall demand and impact broader economic performance.
Extensive research has also investigated the relationship between inflation and stock prices, identifying inflation as a significant factor affecting stock indices (Boudoukh & Richardson, 1993; Fama & Schwert, 1977; Jaffe & Mandelker, 1976). While some studies have reported a positive correlation between inflation and stock prices, others have found a negative relationship.
Moreover, trade openness has been recognized as a key factor influencing stock market fluctuations. Open economies are more vulnerable to external shocks due to increased global risk-sharing among markets. Although some studies have not found conclusive evidence of a direct effect between trade openness and stock prices, trade openness remains one of the influential factors (Nickmansh, 2016).
Stock prices reflect the present value of future cash flows, which are subject to two main effects: cash flow changes driven by increased production and interest rates, which serve as a discount factor. Stock prices tend to decline when expected cash flows decrease or interest rates rise. The level of actual economic activity directly influences cash flows, as higher economic activity generally leads to increased cash flow. Among the various indicators used to predict commodity markets, real Gross Domestic Product (GDP) is considered the most comprehensive measure of economic activity (Yuhasin, 2011; Christopher et al., 2006).
Finally, global uncertainty plays a significant role in shaping the internal economic environment of countries, making it an important global macroeconomic variable that influences the performance of publicly traded companies on the stock exchange.
Volume 5, Issue 2 (8-2024)
Abstract
Aims: Today, the use of artificial intelligence has grown significantly and is developing into a new field. The main goal of this research is to identify the capabilities of artificial intelligence for advancing the design and implementation process in the built environment. The practical goal of the research is to develop and apply the most important achievements of machine learning in the field of design.
Methods: The main research method is meta-analysis within the paradigm of "free research", with a critical approach and a basic design, which examines the general body of knowledge in this field using broad techniques. To consolidate the literature on the topic, articles related to machine learning in unsupervised, semi-supervised, and reinforcement learning were collected by searching three reliable knowledge bases in this field, and their most important capacities, shortcomings, strengths, and weaknesses were reviewed.
Findings: Quantitative findings from the combined data indicate that supervised machine learning and supervised deep learning can be the best options to recommend for the future of design. While the learning process in deep learning is gradual and slower, supervised machine learning works faster in the testing phase.
Conclusion: The research emphasizes that supervised machine learning is the best option for predicting answers in the design process; however, when creativity in design is desired in addition to prediction, deep learning is more efficient.
Volume 7, Issue 3 (7-2019)
Abstract
Aims: Health literacy (HL) is the main indicator of the health-literacy level of people in a given society. Discovering and understanding the factors affecting the HL level can help experts improve these factors in the target community. This study aimed to classify the health literacy of a population and identify its major components using data mining approaches.
Instruments and Methods: In this paper, we obtained more detail about the major factors affecting the health literacy level of the target society by assessing evolutionary methods. We used Particle Swarm Optimization (PSO) together with the KNN and fuzzy KNN algorithms for classification and applied a wrapper technique for feature selection in our model. Feature selection was performed with weighted features and selects the most effective features for health literacy. The proposed model evaluates a health literacy data set with two classifiers, with and without fuzzy logic. The applied data set is real data gathered from a descriptive-analytic cross-sectional study on the adult population, including 2133 records with 74 attributes, conducted in 2016 in South Khorasan province. Effective factors on the HL level of the population, by region and for the total population, were obtained without any statistical analysis tools and with minimal human interference using an evolutionary method.
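To make the wrapper idea concrete, the following is a simplified, hypothetical sketch (synthetic data, illustrative swarm parameters, not the study's implementation) in which a small particle swarm searches for feature weights that maximize KNN cross-validation accuracy.

```python
# Simplified sketch of PSO-based wrapper feature weighting with a KNN classifier.
# Synthetic data; swarm parameters and feature counts are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=20, n_informative=6, random_state=0)
n_features = X.shape[1]

def fitness(weights):
    """CV accuracy of KNN on features scaled by the candidate weight vector."""
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, X * weights, y, cv=5).mean()

# Particle swarm over feature weights in [0, 1]
n_particles, n_iter, w, c1, c2 = 20, 30, 0.7, 1.5, 1.5
pos = rng.uniform(0, 1, (n_particles, n_features))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.uniform(size=pos.shape), rng.uniform(size=pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    fit = np.array([fitness(p) for p in pos])
    improved = fit > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

print("Best CV accuracy:", pbest_fit.max())
print("Most effective features:", np.argsort(gbest)[::-1][:6])  # highest-weighted features
```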
Findings: The proposed model identified effective factors on the health literacy level of the population in South Khorasan province. The results show an accuracy of 92.02% for the total population and 97.99% for regional populations.
Conclusion: The simulations demonstrate that the evolutionary method is a suitable way to extract results from health data sets and also show the superiority of the proposed method.
Volume 9, Issue 2 (9-2018)
Abstract
Aims: One of the most important areas in medical research is the identification of disease-causing genes, which helps uncover the mechanisms underlying disease and, as a result, supports earlier diagnosis and better treatment. In recent years, microarray technology has helped biologists gain a better understanding of cellular processes. To this end, the application of efficient methods in microarray data analysis is very important. The aim of this study was to introduce the GRAP gene as an Alzheimer's disease candidate gene using microarray data analysis.
Materials and Methods: In the present bioinformatic study, which was conducted on an Alzheimer's microarray data set containing 12990 genes, 15 patients, and 16 healthy subjects, by combining Fisher, Significance Analysis of Microarray (SAM), and Particle Swarm Optimization (PSO) methods as well as Classification and Regression Tree (CART), a new method was presented for analyzing microarray gene expression data to identify genes involved in Alzheimer's incidence.
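As a rough illustration of the gene-filtering and classification steps, the hedged sketch below ranks genes with a Fisher score before fitting a classification and regression tree. The expression matrix is synthetic and the gene counts are illustrative; this is not the study's combined Fisher/SAM/PSO pipeline.

```python
# Hedged sketch: Fisher-score gene ranking followed by a CART classifier.
# Synthetic expression matrix; gene counts and cutoffs are illustrative.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients, n_controls, n_genes = 15, 16, 1000
X = rng.normal(size=(n_patients + n_controls, n_genes))
y = np.array([1] * n_patients + [0] * n_controls)
X[y == 1, :5] += 1.5  # make the first five genes informative

def fisher_score(X, y):
    """Per-gene Fisher score: between-class separation over within-class spread."""
    m1, m0 = X[y == 1].mean(0), X[y == 0].mean(0)
    v1, v0 = X[y == 1].var(0), X[y == 0].var(0)
    return (m1 - m0) ** 2 / (v1 + v0 + 1e-12)

scores = fisher_score(X, y)
top_genes = np.argsort(scores)[::-1][:20]          # keep the 20 highest-scoring genes

cart = DecisionTreeClassifier(max_depth=3, random_state=0)
acc = cross_val_score(cart, X[:, top_genes], y, cv=5).mean()
print("Top-ranked genes:", top_genes[:5])
print("CV accuracy with CART on selected genes:", acc)
```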
Findings: The accuracy level of the proposed method was 90.32% and the interpretation of the results from a biological point of view indicated that the proposed method has worked well; finally, the proposed method introduced 4 genes, of which, until now, 3 genes (75%) have been reported in biological studies as genes that cause Alzheimer’s disease.
Conclusion: In addition to proposing a new feature selection method for the analysis of microarray data, this study has introduced a new gene (GRAP) as a candidate gene related to Alzheimer’s disease.
Volume 9, Issue 3 (7-2021)
Abstract
Aims: Hospital systems worldwide are presently facing unprecedented challenges from COVID-19. Predicting deteriorating or critical cases can help triage patients and assist in effective medical resource allocation. This study aimed to develop and validate a prediction model based on machine learning algorithms to identify hospitalized COVID-19 patients requiring transfer to the ICU based on clinical parameters.
Materials & Methods: This retrospective, single-center study was conducted on cumulative data from COVID-19 patients (N=1225) admitted from March 9, 2020, to December 20, 2020, to Mostafa Khomeini Hospital, affiliated with Ilam University of Medical Sciences (ILUMS), the focal-point center for COVID-19 care and treatment in Ilam, western Iran. Thirteen ML techniques from six different groups were applied to predict ICU admission. To evaluate the performance of the models, metrics derived from the confusion matrix were calculated. The algorithms were implemented using WEKA 3.8 software.
Findings: The median age of the patients was 50.9 years, and 664 (54.2%) were male. The experimental results indicate that meta algorithms had the best performance in ICU admission risk prediction, with an accuracy of 90.37%, a sensitivity of 90.35%, a precision of 88.25%, an F-measure of 88.35%, and a ROC of 91%.
Conclusion: Machine Learning algorithms are helpful predictive tools for real-time and accurate ICU risk prediction in patients with COVID-19 at hospital admission. This model enables and potentially facilitates more responsive health systems that are beneficial to high-risk COVID-19 patients.
Volume 10, Issue 1 (1-2022)
Abstract
Aims: Breast cancer is one of the most prevalent cancers and the main cause of cancer-related deaths in women globally. Thus, this study aimed to construct and compare the performance of several rule-based machine learning algorithms for predicting breast cancer.
Instrument & Methods: The data were collected from the Breast Cancer Registry database of Ayatollah Taleghani Hospital, Abadan, Iran, from December 2017 to January 2021 and contained information on 949 non-breast-cancer and 554 breast-cancer cases. Mean values and the K-nearest neighbor algorithm were used to replace missing quantitative and qualitative data fields, respectively. In the next step, the Chi-square test and binary logistic regression were used for feature selection. Finally, the best rule-based machine learning algorithm was determined by comparing different evaluation criteria. RapidMiner Studio 7.1.1 and Weka 3.9 software were used.
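A hedged sketch of the preprocessing steps described above (mean imputation for numeric fields, KNN-based imputation for the remaining fields, then a chi-square filter) might look like the following; the variables and data are invented for illustration.

```python
# Hedged sketch of the preprocessing pipeline: mean imputation for numeric fields,
# KNN imputation for the remaining fields, and chi-square feature selection.
# Feature layout and data are illustrative placeholders.
import numpy as np
from sklearn.impute import SimpleImputer, KNNImputer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
n = 500
numeric = rng.normal(50, 10, size=(n, 3))                      # e.g. age, BMI, lab value
categorical = rng.integers(0, 3, size=(n, 4)).astype(float)    # label-encoded categories
y = rng.integers(0, 2, size=n)                                  # breast cancer yes/no

# Introduce some missing values
numeric[rng.uniform(size=numeric.shape) < 0.05] = np.nan
categorical[rng.uniform(size=categorical.shape) < 0.05] = np.nan

numeric_filled = SimpleImputer(strategy="mean").fit_transform(numeric)
categorical_filled = np.round(KNNImputer(n_neighbors=5).fit_transform(categorical))

X = np.hstack([numeric_filled, categorical_filled])
X_scaled = MinMaxScaler().fit_transform(X)                      # chi2 requires non-negative features

selector = SelectKBest(chi2, k=5).fit(X_scaled, y)
print("Selected feature indices:", selector.get_support(indices=True))
```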
Findings: As a result of feature selection, nine variables were considered the most important for data mining. Overall, the comparison of rule-based machine learning algorithms demonstrated that the J-48 algorithm, with an accuracy of 0.991, an F-measure of 0.987, and an AUC of 0.9997, performed better than the others.
Conclusion: J-48 provides a reasonable level of accuracy for correct BC risk prediction. We believe it would be beneficial for designing intelligent decision support systems for the early detection of high-risk patients, which could inform proper interventions by clinicians.
Volume 10, Issue 4 (12-2019)
Abstract
Gene expression, the flow of information from DNA to proteins, is a fundamental biological process. The expression of one gene can be regulated by the product of another gene. These regulatory relationships are usually modeled as a network: genes are modeled as nodes and their relationships are shown as edges. Many efforts have been made to discover how genes regulate one another's expression. This paper presents a new method that employs expression data and ontological data to infer co-expression networks, i.e., networks made by connecting genes with similar expression patterns. In brief, the method begins by learning associations between the available ontological information and the provided co-expression data. It then finds both known and novel co-expressed pairs of genes. Finally, the method uses a self-organizing map to adjust the estimates made in the previous step and to form the gene co-expression network (GCN) for the input genes. The results show that the proposed method works well on biological data and that its predictions are accurate; consequently, the co-expression networks generated by the proposed method are very similar to the biological networks and to those constructed with no missing data. The method is written in C++ and is available upon request from the corresponding author.
Volume 11, Issue 3 (10-2023)
Abstract
Aims: The COVID-19 pandemic has led to the global distribution of vaccines, but there are concerns regarding potential side effects. Hair loss is one of the less commonly reported side effects. The present study aimed to investigate the effect of COVID-19 vaccinations on hair loss.
Instruments & Methods: A cross-sectional descriptive study was conducted with 580 participants aged between 20 to 72 years, consisting of 270 males and 310 females. Machine learning techniques were employed to analyze the data and determine any potential relationship between COVID-19 vaccines and hair loss. A logistic regression analysis was used to assess the odds ratio and 95% confidence interval for hair loss.
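For illustration, the odds ratio and 95% confidence interval can be obtained from a logistic regression fit. The sketch below is a generic, hedged example with made-up variables and synthetic data, not the study's dataset.

```python
# Hedged sketch: odds ratio and 95% CI for hair loss after vaccination
# from a logistic regression. Variables and data are synthetic placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 580
df = pd.DataFrame({
    "vaccinated": rng.integers(0, 2, n),
    "age": rng.integers(20, 73, n),
    "female": rng.integers(0, 2, n),
})
# Synthetic outcome loosely tied to the predictors
logit = -2 + 0.3 * df["vaccinated"] + 0.4 * df["female"] + 0.01 * (df["age"] - 45)
df["hair_loss"] = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["vaccinated", "age", "female"]])
model = sm.Logit(df["hair_loss"], X).fit(disp=0)

odds_ratios = np.exp(model.params)
conf_int = np.exp(model.conf_int())        # 95% CI on the odds-ratio scale
print(pd.concat([odds_ratios.rename("OR"),
                 conf_int.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```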
Findings: Of the total participants, 17.6% reported experiencing hair loss after receiving the COVID-19 vaccine. This percentage was higher in females (19.4%) compared to the males (15.2%). There was a significant association between the COVID-19 vaccine and hair loss in both males and females. The odds ratio for developing hair loss after receiving the COVID-19 vaccine was 1.34 (95% CI: 1.04-1.73) for females and 1.12 (95% CI: 0.81-1.54) for males.
Conclusion: Hair loss is a rare but possible side effect of COVID-19 vaccination in both males and females, with a higher prevalence in females than in males. Individuals with certain comorbidities, such as hypertension and diabetes, may be at a higher risk of experiencing hair loss after COVID-19 vaccination.
Volume 12, Issue 4 (10-2024)
Abstract
Aims: Artificial intelligence (AI) and machine learning (ML) are revolutionizing healthcare by enhancing the prediction of learning needs and enabling tailored educational interventions for patients and staff. This study explores the application of AI and ML models to predict learning needs from the patient's perspective.
Instruments & Methods: Three ML models (Linear Regression, Random Forest, and Gradient Boosting) were trained on health literacy, demographic, and treatment data from 218 cancer patients at Sultan Qaboos Comprehensive Cancer Center. Evaluation metrics included Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), R2 Score, and Area Under the Curve (AUC). Classification models (Random Forest, Gradient Boosting, Decision Tree, and Extra Trees) were assessed for accuracy, precision, recall, F1-score, and AUC in categorizing learning needs.
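A minimal, hedged sketch of the regression setup is shown below, with synthetic features standing in for the health literacy, demographic, and treatment variables, and with the standard MAE/RMSE/R2 metrics.

```python
# Hedged sketch: gradient boosting regression of a learning-needs score,
# evaluated with MAE, RMSE, and R2. Features and data are placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(218, 8))                                  # stand-in predictors
y = X @ rng.normal(size=8) * 0.1 + rng.normal(0, 0.05, 218)    # stand-in learning-needs score

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

gbr = GradientBoostingRegressor(n_estimators=300, learning_rate=0.05,
                                max_depth=3, random_state=0)
gbr.fit(X_train, y_train)
pred = gbr.predict(X_test)

print("MAE :", mean_absolute_error(y_test, pred))
print("RMSE:", mean_squared_error(y_test, pred) ** 0.5)
print("R2  :", r2_score(y_test, pred))
```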
Findings: Gradient Boosting had the best predictive performance (MAE: 0.0534, RMSE: 0.0788, R²: 0.9844, AUC: 0.96), followed by Random Forest (AUC: 0.93). Linear Regression was less effective (AUC: 0.85). Key predictors included literacy level in chemotherapy, hormonal therapy, and treatment experiences, while demographic factors had minimal impact. For classification, Gradient Boosting and Decision Tree models achieved the highest accuracy (96.51%) and AUC (0.96). Random Forest showed 94.19% accuracy, while Extra Trees had 90.70%, indicating variability in model performance.
Conclusion: AI and ML, particularly Gradient Boosting, demonstrate strong potential in predicting and categorizing learning needs.
Volume 13, Issue 1 (3-2025)
Abstract
Volume 14, Issue 3 (11-2024)
Abstract
Objective: Timely delivery of medications, medical equipment, and other essential supplies is critical to patient care and can often be life-saving. Delivery delays in the healthcare supply chain can lead to increased costs and operational challenges for healthcare organizations and affect patient care and financial stability. Efficient and reliable supply chain management is critical to reduce these risks and ensure integrated performance in the health industry. This research addresses the delay in the delivery of health commodities in the global health supply chain of the United States Agency for International Development. It presents a framework based on the support vector machine technique and Bayesian optimization to predict the delivery status of health commodities. It also determines the features that have had the greatest impact in predicting the status of commodities delivery for data-driven health supply chain management.
Method: The research method is design science. The study presents a framework based on the support vector machine technique and Bayesian optimization to predict the delivery status of health commodities and compares the performance of different classification algorithms for predicting transportation status.
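As a hedged illustration of this step (assuming the scikit-optimize package, with synthetic data standing in for encoded shipment features such as destination country or shipping method), an SVM can be tuned with Bayesian optimization as follows.

```python
# Hedged sketch: SVM classifier tuned with Bayesian optimization
# (scikit-optimize's BayesSearchCV). Data are synthetic placeholders
# for encoded shipment features; search ranges are illustrative.
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from skopt import BayesSearchCV
from skopt.space import Real, Categorical

X, y = make_classification(n_samples=1000, n_features=15, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)  # delayed vs. on-time stand-in

search = BayesSearchCV(
    SVC(),
    {
        "C": Real(1e-2, 1e3, prior="log-uniform"),
        "gamma": Real(1e-4, 1e1, prior="log-uniform"),
        "kernel": Categorical(["rbf", "linear"]),
    },
    n_iter=32,
    cv=5,
    scoring="accuracy",
    random_state=0,
)
search.fit(X, y)

print("Best hyperparameters:", search.best_params_)
print("Cross-validated accuracy:", search.best_score_)
```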
Findings: The results indicate that the presented framework based on the support vector machine technique and Bayesian optimization leads to a classification accuracy of 95%, outperforming other techniques to predict delivery delay. The results showed that the features of the destination country, shipping method, supplier, and production location are the most influential features in predicting the delivery status.
Volume 16, Issue 1 (12-2024)
Abstract
Blood pressure monitoring is a vital component of maintaining overall health. High blood pressure values, as a risk factor, can lead to heart attacks, strokes, and heart and kidney failures. Similarly, low blood pressure values can also be dangerous, causing dizziness, weakness, fainting, and impaired oxygen delivery to organs, resulting in brain and heart damage. Consequently, continuous monitoring of blood pressure levels in high-risk individuals is very important. A Holter blood pressure monitoring device is prescribed for many patients due to its ability to provide long-term and valuable blood pressure data. The pursuit of software techniques and the development of cuffless blood pressure measurement devices, while ensuring patient comfort and convenience, are among the significant challenges that researchers are focusing on. In this study, a deep learning framework based on the UNet network is proposed for continuous blood pressure estimation from photoplethysmography signals. The proposed model was evaluated on the UCI database, involving 942 patients under intensive care, and achieved mean absolute errors of 8.88, 4.43, and 3.32, with standard deviations of 11.01, 6.18, and 4.15, respectively, for systolic, diastolic, and mean arterial blood pressure values. According to the international BHS standard, the proposed method meets grade A criteria for diastolic and mean blood pressure estimations and grade C for systolic blood pressure estimation. The results of this study demonstrate that the suggested deep learning framework has the necessary potential for blood pressure estimation from PPG signals in real-world applications.
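For reference, the BHS grade can be computed from the cumulative percentages of absolute errors within 5, 10, and 15 mmHg. Below is a small, hedged utility; the thresholds follow the commonly cited BHS protocol, and the error values are placeholders rather than the study's results.

```python
# Hedged utility: assign a BHS grade from absolute blood-pressure errors (mmHg).
# Thresholds follow the commonly cited BHS protocol; a grade requires meeting
# all three cumulative-percentage criteria.
import numpy as np

BHS_THRESHOLDS = {           # grade: minimum % of errors within (5, 10, 15) mmHg
    "A": (60, 85, 95),
    "B": (50, 75, 90),
    "C": (40, 65, 85),
}

def bhs_grade(abs_errors):
    abs_errors = np.asarray(abs_errors)
    pct = [100 * np.mean(abs_errors <= t) for t in (5, 10, 15)]
    for grade, mins in BHS_THRESHOLDS.items():
        if all(p >= m for p, m in zip(pct, mins)):
            return grade, pct
    return "D", pct

# Placeholder errors standing in for per-sample |predicted - reference| values
rng = np.random.default_rng(0)
dbp_errors = np.abs(rng.normal(0, 4.4, 1000))
print(bhs_grade(dbp_errors))
```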
Volume 20, Issue 1 (4-2020)
Abstract
Generally, labyrinth weirs pass more water than equivalent rectangular weirs, so these weirs are popular among hydraulic and environmental engineers. In this paper, for the first time, a novel artificial intelligence (AI) technique called the "outlier robust extreme learning machine (ORELM)" is used to estimate the discharge coefficient of labyrinth weirs. The ORELM method was proposed to overcome the difficulties of the classical ELM in predicting datasets with outliers; it uses the concept of the "sparsity characteristic of outliers". Also, in this study, to verify the results of the numerical models, the experimental measurements conducted by Kumar et al. (2011) and Seamons (2014) are employed. The experimental model of Kumar et al. (2011) consists of a rectangular channel with a length of 12 m, a width of 0.28 m, and a depth of 0.41 m; the weir is made of steel sheets and placed at an 11 m distance from the rectangular channel inlet. The Seamons (2014) experimental model was set up in a rectangular channel with a length, width, and height of 14.6 m, 1.2 m, and 0.9 m, respectively. First, the number of hidden layer neurons was varied from 5 to 45, and the optimal number of hidden layer neurons was found to be 5. In this study, Monte Carlo simulations are used to examine the abilities of the numerical models. The main idea of this method is to solve problems, which may be deterministic in principle, using random sampling; Monte Carlo methods are usually implemented for simulating physical and mathematical systems that are not solvable by other methods. The K-fold cross-validation method is employed to validate the results of the numerical models. To this end, the observational data are divided into five equal sets; each time, one set is used for testing the numerical model and the rest for training it. This procedure is repeated five times, and each subset is used exactly once for testing. This approach increases the flexibility of the numerical model when dealing with the observational data, so the numerical model is able to model a greater range of laboratory data. For instance, the maximum value of R2 is obtained for the K=4 case (R2=0.954), while for the K=5 case the values of RMSE and MARE are estimated at 0.034 and 4.408, respectively. After that, different activation functions are evaluated to detect the most accurate one for the numerical model. Subsequently, six different ORELM models are developed using the parameters affecting the discharge coefficient of labyrinth weirs, and the superior model and the most effective input parameters are identified through a sensitivity analysis. For example, the values of R2, RMSRE, and NSC for the superior model are calculated as 0.943, 5.224, and 0.940, respectively. Furthermore, the ratio of the head above the weir to the weir height (HT/P) and the ratio of the width of a single cycle to the weir height (w/P) are identified as the most important input parameters. The results of the superior ORELM model are also compared with other artificial intelligence models, including the extreme learning machine, the artificial neural network, and the support vector machine, and it is concluded that ORELM performs better. Finally, an uncertainty analysis is conducted for the ORELM, ELM, ANN, and SVM models, showing that ORELM exhibits an overestimating performance.
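A generic, hedged sketch of the five-fold validation scheme described above is shown below; an off-the-shelf MLP regressor stands in for ORELM (which is not available in mainstream Python libraries), and the inputs and targets are synthetic.

```python
# Hedged sketch of the K-fold validation scheme (K=5). An MLP regressor is
# used as a stand-in for ORELM. Inputs mimic dimensionless parameters such
# as HT/P and w/P; the data are synthetic.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0.05, 0.9, size=(300, 4))                             # stand-in dimensionless inputs
y = 0.7 - 0.5 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.02, 300)    # stand-in discharge coefficient

kf = KFold(n_splits=5, shuffle=True, random_state=0)
for k, (train_idx, test_idx) in enumerate(kf.split(X), 1):
    model = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                         max_iter=5000, random_state=0)
    model.fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    print(f"K={k}  R2={r2_score(y[test_idx], pred):.3f}  "
          f"RMSE={mean_squared_error(y[test_idx], pred) ** 0.5:.4f}")
```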
Volume 21, Issue 6 (12-2021)
Abstract
In recent decades, structural health monitoring has played a key role in preventing damage and extending the life of structures. For behavioral assessment, it is desirable to use tools that achieve sufficient accuracy at low cost. Processing behavioral data requires methods that can correctly identify and diagnose different levels of damage from the available information.
Nowadays, sensors are used to measure the behavior of structures, including deformations, displacements, and even deflections, but these sensors have weak points: the risk of damage to the sensor, pointwise and one-dimensional measurement, data that are difficult to analyze, and the high cost of using multiple or high-tech sensors.
Optical behavior measurement and close-range photogrammetry have recently received attention due to their low cost and good accuracy. This method offers advantages such as indirect (non-contact) measurement of objects, high-speed image capture, easy access to convenient digital cameras, low viewing costs, and the ability to process composite and instant data with easy operation. In addition, the high flexibility of this method in measurement accuracy and the ability to design for a predetermined accuracy are important features of this tool.
Analytical methods are based on rules or equations that provide a clear definition of the problem. These methods work well when the rules are accurately known and defined, but in many practical cases the rules are unknown or very difficult to discover, so the calculations cannot be performed using analytical methods.
A neural network is a generalizable model based on the experience of a set of training data and is therefore free of explicit rules. Neural networks can collect, store, analyze, and process large amounts of data from numerical analyses or experiments. They are therefore able to predict and build diagnostic models for solving various engineering problems and tasks.
In this paper, an attempt has been made to use this method to measure and diagnose damage in a laboratory model of a scaled suspension bridge with relatively complex behavior. For this purpose, the structure was subjected to uniform static loading at three load levels in three states: healthy, damaged deck, and damaged cables. Damage was created intentionally in the laboratory model, and from the information obtained, a database of bridge behavior in various situations was created. To assess the feasibility of different data processing and damage detection methods, the data in the database were first used with a simple linear method (direct comparison) and for training machine learning algorithms. After that, deliberate damage was introduced again in the laboratory structure to allow the efficiency and accuracy of the different methods to be tested. Finally, the accuracy, precision, and stability of the support vector machine and artificial neural network data processing methods were compared.
The results showed that, with object-space bundle adjustment of two-dimensional optical behavior measurements from close-range photogrammetry, a guaranteed accuracy of 0.0021 mm could be achieved. Using intensity image processing appears helpful for easing the calculations. Using a high number of nodes in the hidden layer makes training the neural network more difficult and time-consuming. At the first level of processing, detection of the presence or absence of damage was achieved with the complete superiority of neural networks, with 100% accuracy; at the second level, detection of the affected area, depending on the type of processing, the neural network with a hyperbolic tangent transfer function achieved 93% accuracy and the support vector machine achieved 68% accuracy.
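As a rough, hedged illustration of the final comparison step, with synthetic features standing in for the photogrammetric displacement measurements, an MLP with a hyperbolic tangent activation can be compared against a support vector machine as follows.

```python
# Hedged sketch: comparing a tanh MLP and an SVM for damage-state classification.
# Features are synthetic stand-ins for photogrammetric displacement measurements.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=400, n_features=12, n_informative=6,
                           n_classes=3, n_clusters_per_class=1, random_state=0)  # healthy / deck / cable
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

models = {
    "ANN (tanh)": MLPClassifier(hidden_layer_sizes=(20,), activation="tanh",
                                max_iter=3000, random_state=0),
    "SVM (RBF)": SVC(kernel="rbf", C=10, gamma="scale"),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```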
Volume 21, Issue 8 (8-2021)
Abstract
To minimize the cost of maintenance and repair of rotating industrial equipment, one commonly used method is condition monitoring by sound analysis. This study was performed to diagnose faults in a single-phase electric motor through a machine learning method, with the aim of monitoring its condition by sound analysis. Test conditions included the healthy state, bearing failure, shaft imbalance, and shaft wear at two speeds of 500 and 1400 rpm. A microphone was installed on the electric motor to record data. After data acquisition, signal processing, and statistical analysis, the best characteristics were selected by the PCA method, and the data were then clustered by a machine learning method using the K-means algorithm. The features used in the ANFIS modeling process were the common features selected at both electric motor speeds. After evaluating the models, the best model had the highest accuracy of 96.82%, and the average accuracy for overall fault classification was 96.71%. The results showed that the analysis of acoustic signals and the modeling process can be used to diagnose electric motor defects by machine learning. Based on the obtained results, condition monitoring of the electric motor through acoustic analysis reduces its downtime and keeps it working in the industry, and proper condition monitoring reduces its repair costs.
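A compact, hedged sketch of the feature-reduction and clustering steps, with synthetic acoustic features in place of the recorded signals, could look like this.

```python
# Hedged sketch: PCA-based feature reduction followed by K-means clustering
# of acoustic features into four condition classes. Data are synthetic.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in statistical features (e.g. RMS, kurtosis, spectral moments) for 200 recordings,
# drawn around four condition centers: healthy, bearing failure, imbalance, shaft wear
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(50, 10)) for c in range(4)])

X_scaled = StandardScaler().fit_transform(X)
X_reduced = PCA(n_components=3).fit_transform(X_scaled)   # keep the strongest components

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X_reduced)
print("Cluster sizes:", np.bincount(kmeans.labels_))
```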
Volume 21, Issue 151 (8-2024)
Abstract
By varying the thermo-mechanical processing, formulation, and storage conditions, 36 samples of low-fat mozzarella cheese were produced, and their hardness, adhesiveness, cohesiveness, springiness, gumminess, and chewiness were evaluated by TPA; the data were analyzed in a completely randomized factorial design with univariate analysis in IBM SPSS Statistics 26. The same samples were then imaged with a hyperspectral camera in the 400-1000 nm range; after pre-processing the spectra and selecting the important wavelengths with feature selection algorithms, calibration models including multiple linear regression, partial least squares regression, support vector machine with a linear kernel, multilayer perceptron neural network, random forest, and a majority voting algorithm were developed in Python, and the performance of the models was evaluated. The results showed that as the stretching time in hot water increased from 2 to 8 minutes, hardness, springiness, gumminess, chewiness, and cohesiveness increased, while adhesiveness decreased. The majority voting algorithm (VOTING) showed the highest performance in hardness prediction (R2p=0.878, RMSEp=2606.52, RPD=2.12) and was able to predict the cohesiveness of mozzarella with higher accuracy than the other algorithms. Multiple linear regression could not predict adhesiveness properly, but the random forest method predicted this feature with high performance (R2p=0.808, RMSE=56.49, RPD=1.90). The multilayer perceptron neural network predicted springiness (R2p=0.848, RMSEp=0.094, RPD=2.12) and chewiness (R2p=0.84, RMSEp=1117.21, RPD=1.96) with the least error and high accuracy. All methods except random forest were able to predict the gumminess of mozzarella with high efficiency. This study showed that the process conditions had significant effects on the textural characteristics and that hyperspectral imaging is a suitable alternative method for estimating the textural characteristics of mozzarella cheese.
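To illustrate the voting-ensemble idea for a regression target such as hardness (for regression, the ensemble averages the member predictions), the following is a hedged sketch with synthetic spectra and placeholder models, not the study's calibration data.

```python
# Hedged sketch: combining calibration models with an averaging ensemble
# (scikit-learn's VotingRegressor) to predict a texture attribute from spectra.
# The spectra and target values are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.svm import LinearSVR
from sklearn.ensemble import RandomForestRegressor, VotingRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(36, 50))        # stand-in reflectance at 50 selected wavelengths
y = 3000 + 2000 * X[:, 5] - 1500 * X[:, 20] + rng.normal(0, 100, 36)  # stand-in hardness

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingRegressor([
    ("mlr", LinearRegression()),
    ("svr", make_pipeline(StandardScaler(), LinearSVR(max_iter=10000))),
    ("rf", RandomForestRegressor(n_estimators=300, random_state=0)),
])
ensemble.fit(X_train, y_train)
pred = ensemble.predict(X_test)

rmse = mean_squared_error(y_test, pred) ** 0.5
rpd = y_test.std() / rmse                   # ratio of performance to deviation
print(f"R2p={r2_score(y_test, pred):.3f}  RMSEp={rmse:.1f}  RPD={rpd:.2f}")
```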
Volume 26, Issue 4 (3-2023)
Abstract
Mines and related industries can affect their surrounding environment not only during their activity but also after being abandoned. Among their harmful effects, groundwater and surface water contamination and soil contamination can be mentioned. To manage these environmental effects, reasonable methods are needed for modelling heavy metal concentrations in soil. This study presents a framework for modelling heavy metal soil contamination based on spectroscopy and statistical models. For this purpose, the spectral curves of 53 soil samples, taken from an abandoned mine and its surrounding areas in New South Wales, Australia, were collected with a spectroradiometer across visible to short-wave infrared (SWIR) wavelengths. After calculating the second derivative of the collected spectral data, the random forest feature selection method (RFFS) was used to determine the most important spectral data for modelling concentrations of heavy metals including lead, silver, cadmium, and mercury. Then, modelling techniques including multiple linear regression, random forest regression, and support vector regression (SVR) were applied to the selected spectral data. The results indicated that SWIR wavelengths are the most important spectral data for modelling heavy metal concentrations. Moreover, the non-linear machine learning methods, especially random forest, with an RMSE of 0.8 ppm and R2 of 0.51 for lead and an RMSE of 9.4 ppm and R2 of 0.46 for cadmium, performed better than multiple linear regression.
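A hedged sketch of this workflow (second-derivative spectra, random-forest-based band selection, then regression models) on synthetic data might look like the following; wavelengths, concentrations, and sample values are placeholders.

```python
# Hedged sketch of the modelling workflow: second-derivative spectra,
# random forest feature (band) selection, then regression models.
# Wavelengths, concentrations, and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_bands = 53, 200
spectra = rng.normal(size=(n_samples, n_bands)).cumsum(axis=1)   # smooth stand-in spectra
lead_ppm = 2 + 0.5 * spectra[:, 150] - 0.3 * spectra[:, 170] + rng.normal(0, 0.3, n_samples)

second_deriv = np.gradient(np.gradient(spectra, axis=1), axis=1)  # second derivative along wavelength

# Random-forest-based band selection: keep the 20 most important bands
rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(second_deriv, lead_ppm)
top_bands = np.argsort(rf.feature_importances_)[::-1][:20]
X_sel = second_deriv[:, top_bands]

for name, model in [("MLR", LinearRegression()),
                    ("RF", RandomForestRegressor(n_estimators=500, random_state=0)),
                    ("SVR", SVR(kernel="rbf", C=10))]:
    r2 = cross_val_score(model, X_sel, lead_ppm, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R2 = {r2:.2f}")
```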