Logistic Regression for Machine Learning in Process Tomography

Sensors ◽  
2019 ◽  
Vol 19 (15) ◽  
pp. 3400 ◽  
Author(s):  
Tomasz Rymarczyk ◽  
Edward Kozłowski ◽  
Grzegorz Kłosowski ◽  
Konrad Niderla

The main goal of the research presented in this paper was to develop a refined machine learning algorithm for industrial tomography applications. The article presents algorithms based on logistic regression for image reconstruction using electrical impedance tomography (EIT) and ultrasound transmission tomography (UST). The test object was a tank filled with water in which the reconstructed objects were placed. For both EIT and UST, a novel approach was used in which each pixel of the output image was reconstructed by a separately trained prediction system. It was therefore necessary to use as many predictive systems as there are pixels in the output image. Thanks to this approach, the underdetermined problem was converted into an overdetermined one. To reduce the number of predictors in the logistic regression by removing irrelevant and mutually correlated entries, the elastic net method was used. The developed algorithm, which reconstructs images pixel by pixel, is insensitive to the shape, number, and position of the reconstructed objects. To assess the quality of the mappings obtained with the new algorithm, two metrics were used: the compatibility ratio (CR) and the relative error (RE). The results enabled an assessment of the usefulness of logistic regression in the reconstruction of EIT and UST images.
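As a rough illustration of the pixel-by-pixel scheme described above, the sketch below trains one elastic-net-regularized logistic model per output pixel on synthetic data. scikit-learn stands in for the authors' implementation; all shapes, variable names, and the data-generating rule are assumptions for illustration only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical shapes: boundary measurements per frame and binary output
# pixels (not the paper's actual configuration).
rng = np.random.default_rng(0)
n_frames, n_measurements, n_pixels = 200, 96, 16

X = rng.normal(size=(n_frames, n_measurements))   # simulated EIT/UST readings
W = rng.normal(size=(n_measurements, n_pixels))   # hidden ground-truth weights
Y = (X @ W + rng.normal(scale=0.1, size=(n_frames, n_pixels))) > 0  # pixel labels

# One elastic-net logistic model per output pixel, mirroring the paper's
# pixel-by-pixel scheme; l1_ratio mixes L1 (drops irrelevant predictors)
# and L2 (handles mutually correlated ones).
models = []
for p in range(n_pixels):
    clf = LogisticRegression(penalty="elasticnet", solver="saga",
                             l1_ratio=0.5, C=1.0, max_iter=2000)
    clf.fit(X, Y[:, p])
    models.append(clf)

# Reconstruct an image for a new frame: each pixel is predicted independently.
x_new = rng.normal(size=(1, n_measurements))
image = np.array([m.predict(x_new)[0] for m in models])
print(image.shape)
```

The key design point is that each pixel's model sees the full measurement vector but solves its own small, well-posed classification problem.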

As science and technology reduce death rates, the population continues to grow. With it, land use for urbanization increases, degrading land quality day by day and affecting climate and vegetation. To preserve land quality as far as possible, studies of land-cover images acquired from satellites, across time series, spatial scales, and colour bands, are needed to understand how land can best be used in the future. Using the Normalized Difference Vegetation Index (NDVI) and machine learning algorithms (supervised or unsupervised), it is now possible to classify areas and predict land utilization in future years. Our proposed study enhances the acquired images with an improved vegetation index that segments and classifies the data more efficiently; feeding these data to the machine learning model yields higher accuracy. A sound approach combining a proper model, a suitable machine learning algorithm, and high accuracy is therefore the goal.
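For reference, NDVI itself is a simple band ratio, (NIR − Red) / (NIR + Red). A minimal sketch with synthetic band values follows; the function name, the sample values, and the 0.3 vegetation threshold are illustrative stand-ins, not the enhanced index the study proposes.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI in [-1, 1]; healthy vegetation is typically above ~0.3."""
    nir = nir.astype(np.float64)
    red = red.astype(np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0

# Tiny synthetic rasters standing in for satellite NIR and Red bands.
nir = np.array([[0.60, 0.55], [0.10, 0.50]])
red = np.array([[0.10, 0.12], [0.09, 0.40]])
index = ndvi(nir, red)
vegetation_mask = index > 0.3   # simple thresholding before classification
print(index.round(2))
```

The resulting mask is what a downstream supervised or unsupervised classifier would consume as a feature or label source.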


Author(s):  
Jie Yuan ◽  
Yuan Ji ◽  
Zhou Zhu ◽  
Liya Huang ◽  
Junfeng Qian ◽  
...  

To address the large errors and low performance of traditional methods for checking progressive-image model-matching information, an automatic checking method based on machine learning is proposed. The generation of progressive images is analyzed and target image samples are obtained. On this basis, a machine learning algorithm is used to segment the progressive image samples. Within each segmented region, crawler technology automatically collects model-matching information, and under the constraints of the checking standard, automatic verification of the model-matching information is carried out with respect to geometric structure, image content, and other aspects. Experimental results show that the proposed method reduces verification error by 0.687 Mb and improves progressive-image quality.


2021 ◽  
Vol 143 (2) ◽  
Author(s):  
Joaquin E. Moran ◽  
Yasser Selima

Abstract Fluidelastic instability (FEI) in tube arrays has been studied extensively experimentally and theoretically for the last 50 years, due to its potential to cause significant damage in short periods. Incidents similar to those observed at San Onofre Nuclear Generating Station indicate that the problem is not yet fully understood, probably due to the large number of factors affecting the phenomenon. In this study, a new approach for the analysis and interpretation of FEI data using machine learning (ML) algorithms is explored. FEI data for both single and two-phase flows have been collected from the literature and utilized for training a machine learning algorithm in order to either provide estimates of the reduced velocity (single and two-phase) or indicate if the bundle is stable or unstable under certain conditions (two-phase). The analysis included the use of logistic regression as a classification algorithm for two-phase flow problems to determine if specific conditions produce a stable or unstable response. The results of this study provide some insight into the capability and potential of logistic regression models to analyze FEI if appropriate quantities of experimental data are available.
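A minimal sketch of the classification use named in the abstract: logistic regression labeling operating conditions as stable or unstable. The features (mass-damping parameter, void fraction) and the synthetic stability rule below are assumptions chosen to echo typical FEI correlations, not the authors' dataset or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical two-phase FEI features.
rng = np.random.default_rng(1)
n = 300
mass_damping = rng.uniform(0.1, 10.0, n)
void_fraction = rng.uniform(0.0, 0.9, n)
X = np.column_stack([np.log(mass_damping), void_fraction])

# Synthetic rule: instability grows with void fraction and drops with
# mass-damping (a Connors-like trend, used here only for illustration).
unstable = (void_fraction - 0.1 * np.log(mass_damping)
            + rng.normal(scale=0.1, size=n)) > 0.5

clf = LogisticRegression().fit(X, unstable.astype(int))
probs = clf.predict_proba(X)[:, 1]   # probability of instability
acc = clf.score(X, unstable.astype(int))
print(f"training accuracy: {acc:.3f}")
```

The probability output is what makes logistic regression attractive here: instead of a hard stability boundary, each condition gets a graded risk estimate.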


2022 ◽  
Vol 11 (1) ◽  
pp. 325-337
Author(s):  
Natalia Gil ◽  
Marcelo Albuquerque ◽  
Gabriela de

<p style="text-align: justify;">The article aims to develop a machine-learning algorithm that can predict students’ graduation in the Industrial Engineering course at the Federal University of Amazonas based on their performance data. The methodology uses an information package of 364 students admitted between 2007 and 2019, considering characteristics that can directly or indirectly affect graduation: type of high school, number of semesters taken, grade-point average, lockouts, dropouts, and course terminations. Data treatment included the manual removal of several characteristics that did not add value to the output of the algorithm, resulting in a package of 2184 instances. Logistic regression, MLP, and XGBoost models were developed and compared to predict a binary graduation/non-graduation output for each student, using 30% of the dataset for testing and 70% for training. It was thus possible to identify a relationship between the six attributes explored and to achieve, with the best model, 94.15% accuracy on the predictions.</p>
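The 70/30 protocol described above can be sketched as follows. The feature columns mirror the six attributes named in the abstract, but the values and the toy graduation rule are simulated; only the split ratio and instance count come from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for the 2184-instance student dataset.
rng = np.random.default_rng(42)
n = 2184
X = np.column_stack([
    rng.integers(0, 2, n),          # high-school type (public/private)
    rng.integers(8, 16, n),         # semesters taken
    rng.uniform(4.0, 10.0, n),      # grade-point average
    rng.integers(0, 3, n),          # lockouts
    rng.integers(0, 3, n),          # dropouts
    rng.integers(0, 2, n),          # course terminations
])
y = (X[:, 2] > 6.0).astype(int)     # toy graduation rule, illustration only

# 70/30 train/test split as described in the article.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"test accuracy: {acc:.4f}")
```

Swapping `LogisticRegression` for an MLP or gradient-boosted trees, as the article does, changes only the estimator line; the split and metric stay the same.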


2021 ◽  
Vol 22 (Supplement_1) ◽  
Author(s):  
M Omer ◽  
A Amir-Khalili ◽  
A Sojoudi ◽  
T Thao Le ◽  
S A Cook ◽  
...  

Abstract Funding Acknowledgements Type of funding sources: Public grant(s) – National budget only. Main funding source(s): SmartHeart EPSRC programme grant (www.nihr.ac.uk), London Medical Imaging and AI Centre for Value-Based Healthcare Background Quality measures for machine learning algorithms include clinical measures such as end-diastolic (ED) and end-systolic (ES) volume, volumetric overlaps such as the Dice similarity coefficient, and surface distances such as the Hausdorff distance. These measures capture differences between manually drawn and automated contours but fail to capture a clinician’s trust in an automatically generated contour. Purpose We propose to capture clinicians’ trust directly and systematically. We display manual and automated contours sequentially in random order and ask the clinicians to score the contour quality. We then perform statistical analysis for both sources of contours and stratify the results by contour type. Data The data selected for this experiment came from the National Health Center Singapore. It comprises CMR scans from 313 patients with diverse pathologies, including: healthy, dilated cardiomyopathy (DCM), hypertension (HTN), hypertrophic cardiomyopathy (HCM), ischemic heart disease (IHD), left ventricular non-compaction (LVNC), and myocarditis. Each study contains a short-axis (SAX) stack with ED and ES phases manually annotated. Automated contours are generated for each SAX image for which manual annotation is available, using a machine learning algorithm trained at Circle Cardiovascular Imaging Inc.; the resulting predictions are saved for display in the contour quality scoring (CQS) application. Methods The CQS application displays manual and automated contours in random order and presents the user with the option to assign a contour quality score of 1: Unacceptable, 2: Bad, 3: Fair, or 4: Good. The UK Biobank standard operating procedure is used for assessing the quality of the contoured images.
Quality scores are assigned based on how the contour affects clinical outcomes. However, as images are presented independent of spatiotemporal context, contour quality is assessed by how well the area of the delineated structure is approximated. Consequently, small contours and small deviations are rarely assigned a quality score of less than 2, as they are not clinically relevant. Special attention is given to the RV-endo contours because, often in basal images, two separate contours appear. In such cases, a score of 3 is given if the two disjoint contours sufficiently encompass the underlying anatomy; otherwise they are scored 2 or 1. Results A total of 50991 quality scores (24208 manual and 26783 automated) were generated by five expert raters. The mean scores for manual and automated contours are 3.77 ± 0.48 and 3.77 ± 0.52, respectively. The breakdown of mean quality scores by contour type is included in Fig. 1a, while the distributions of quality scores across raters are shown in Fig. 1b. Conclusion We proposed a method for comparing the quality of manual versus automated contouring. The results suggest similar quality-score statistics for both sources of contours. Abstract Figure 1
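The per-source summary statistics reported above (mean ± standard deviation of 1–4 scores) can be reproduced with a few lines. The score counts and probabilities below are synthetic stand-ins; the study's actual 24208 manual and 26783 automated scores are not public here.

```python
import numpy as np

# Toy quality scores on the CQS 1-4 scale, one array per contour source.
rng = np.random.default_rng(7)
manual = rng.choice([2, 3, 4], size=1000, p=[0.05, 0.15, 0.80])
automated = rng.choice([1, 2, 3, 4], size=1100, p=[0.01, 0.05, 0.14, 0.80])

# Mean ± std per source, the format used in the abstract's Results.
for name, scores in [("manual", manual), ("automated", automated)]:
    print(f"{name}: {scores.mean():.2f} \u00b1 {scores.std():.2f}")
```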


Author(s):  
Alexandre Todorov

The aim of the RELIEF algorithm is to filter out features (e.g., genes, environmental factors) that are relevant to a trait of interest, starting from a set that may include thousands of irrelevant features. Though widely used in many fields, its application to gene-environment interaction studies has been limited thus far. We provide here an overview of this machine learning algorithm and some of its variants. Using simulated data, we then compare the performance of RELIEF to that of logistic regression for screening for gene-environment interactions in SNP data. Even though performance degrades in larger sets of markers, RELIEF remains a competitive alternative to logistic regression, and shows clear promise as a tool for the study of gene-environment interactions. Areas for further improvement of the algorithm are then suggested.
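A minimal sketch of the core RELIEF idea: reward features that differ from the nearest *miss* (opposite class) and penalize those that differ from the nearest *hit* (same class). This is a teaching version on simulated SNP-like data, not the optimized ReliefF variants the overview discusses.

```python
import numpy as np

def relief(X, y, n_iter=100, rng=None):
    """Minimal binary RELIEF feature weighting (Manhattan distances)."""
    rng = rng or np.random.default_rng(0)
    n, m = X.shape
    w = np.zeros(m)
    for _ in range(n_iter):
        i = rng.integers(n)
        dist = np.abs(X - X[i]).sum(axis=1)
        dist[i] = np.inf                      # exclude the instance itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dist, np.inf))   # nearest same-class
        miss = np.argmin(np.where(~same, dist, np.inf)) # nearest other-class
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter

# Simulated SNP-like data: feature 0 is informative, the rest are noise.
rng = np.random.default_rng(3)
y = rng.integers(0, 2, 200)
X = rng.integers(0, 3, size=(200, 10)).astype(float)
X[:, 0] = y * 2 + rng.integers(0, 2, 200)   # informative marker
weights = relief(X, y, n_iter=200, rng=rng)
print(weights.round(2))
```

Irrelevant features accumulate weight near zero, which is exactly the filtering behavior contrasted with logistic regression in the article.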


2020 ◽  
Vol 44 (1) ◽  
pp. 231-269
Author(s):  
Rong Chen

Abstract Plural marking reaches most corners of language. When a noun occurs with another linguistic element, called an associate in this paper, plural marking on the two-component structure has four logically possible patterns: doubly unmarked, noun-marked, associate-marked, and doubly marked. These four patterns are not distributed homogeneously in the world’s languages, because they are shaped by two competing motivations: iconicity and economy. Some patterns are preferred over others, and this preference is consistently found in languages across the world. In other words, there exists a universal distribution of the four plural marking patterns. Furthermore, holding the view that plural marking on associates expresses plurality of nouns, I propose a hypothetical universal which uses the number of pluralized associates to predict plural marking on nouns. A data set collected from a sample of 100 languages is used to test the hypothetical universal, employing the machine learning algorithm logistic regression.
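The statistical test has a simple shape: one count predictor (pluralized associates per language), one binary response (noun plural marking). The sketch below simulates such a test; the data-generating curve is an assumption, while the real 100-language sample lives in the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy typological dataset: 100 "languages", predictor = number of
# associate types that take plural marking (0-5).
rng = np.random.default_rng(5)
n_pluralized_associates = rng.integers(0, 6, 100).reshape(-1, 1)

# Simulated universal: more pluralized associates -> noun marking likelier.
p = 1 / (1 + np.exp(-(n_pluralized_associates.ravel() - 2.5)))
noun_marked = (rng.uniform(size=100) < p).astype(int)

clf = LogisticRegression().fit(n_pluralized_associates, noun_marked)
coef = clf.coef_[0, 0]
print("slope:", coef)   # a positive slope would support the universal
```

In this setup the hypothesized universal reduces to a sign test on the fitted coefficient, which is what makes logistic regression a natural choice for the sample.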


Author(s):  
Nisha Yadav ◽  
Kakoli Banerjee ◽  
Vikram Bali

In the software industry, where the quality of the output depends on human performance, fatigue can be a cause of performance degradation. Fatigue not only degrades quality but is also a health risk factor: sleep disorders, depression, and stress are all results of fatigue and can contribute to serious problems. This article presents a comparative study of different techniques that can be used for detecting fatigue in programmers and data miners who spend long hours in front of a computer screen. Machine learning can be used for worker fatigue detection in general, but some factors are specific to software workers. One such factor is screen illumination: the light of the computer or laptop screen that is cast on the worker's face and makes it difficult for the machine learning algorithm to extract facial features. This article presents a comparative study of the techniques that can be used for general fatigue detection and identifies the best ones.
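One common way to compensate for screen-light glare before facial-feature extraction is histogram equalization; the sketch below shows it on a synthetic washed-out image patch. This is a generic pre-processing technique offered as illustration; the article does not prescribe this exact method.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    spreads crowded intensity values so bright screen illumination
    compresses less facial detail."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

# Simulated face crop washed out by screen glare (values crowded high).
rng = np.random.default_rng(9)
bright = rng.integers(180, 256, size=(64, 64)).astype(np.uint8)
flat = equalize_histogram(bright)
print(bright.min(), bright.max(), "->", flat.min(), flat.max())
```

After equalization the patch spans the full 0–255 range, giving downstream feature extractors more contrast to work with.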


2019 ◽  
Vol 9 (15) ◽  
pp. 3037 ◽  
Author(s):  
Isaac Machorro-Cano ◽  
Giner Alor-Hernández ◽  
Mario Andrés Paredes-Valverde ◽  
Uriel Ramos-Deonati ◽  
José Luis Sánchez-Cervantes ◽  
...  

Overweight and obesity are affecting productivity and quality of life worldwide. The Internet of Things (IoT) makes it possible to interconnect, detect, identify, and process data between objects or services to fulfill a common objective. The main advantages of IoT in healthcare are the monitoring, analysis, diagnosis, and control of conditions such as overweight and obesity and the generation of recommendations to prevent them. However, the objects used in the IoT have limited resources, so it has become necessary to consider other alternatives to analyze the data generated from monitoring, analysis, diagnosis, control, and the generation of recommendations, such as machine learning. This work presents PISIoT: a machine learning and IoT-based smart health platform for the prevention, detection, treatment, and control of overweight and obesity, and other associated conditions or health problems. Weka API and the J48 machine learning algorithm were used to identify critical variables and classify patients, while Apache Mahout and RuleML were used to generate medical recommendations. Finally, to validate the PISIoT platform, we present a case study on the prevention of myocardial infarction in elderly patients with obesity by monitoring biomedical variables.
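PISIoT uses Weka's J48 (a Java C4.5 implementation) for patient classification; as a rough Python stand-in, the sketch below fits a CART decision tree, a closely related algorithm. The two features, their thresholds, and the risk rule are illustrative assumptions, not PISIoT's actual biomedical variables.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical monitored variables for an obesity-risk classifier.
rng = np.random.default_rng(11)
n = 500
bmi = rng.uniform(18, 45, n)
activity_min = rng.uniform(0, 120, n)       # daily activity minutes
X = np.column_stack([bmi, activity_min])
at_risk = ((bmi > 30) & (activity_min < 30)).astype(int)  # toy label rule

# Shallow tree, analogous in spirit to J48's pruned decision trees.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, at_risk)
preds = tree.predict([[35.0, 10.0], [22.0, 60.0]])
print(preds)
```

A tree model also fits the platform's recommendation pipeline well, since each root-to-leaf path reads off as a human-interpretable rule that RuleML-style systems can consume.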

