Digital Triplet Approach for Real-Time Monitoring and Control of an Elevator Security System

Designs ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 9 ◽  
Author(s):  
Michael M. Gichane ◽  
Jean B. Byiringiro ◽  
Andrew K. Chesang ◽  
Peterson M. Nyaga ◽  
Rogers K. Langat ◽  
...  

As Digital Twins gain more traction and their adoption in industry increases, there is a need to integrate such technology with machine learning features to enhance functionality and enable decision-making tasks. This has led to the emergence of a concept known as the Digital Triplet: an enhancement of Digital Twin technology through the addition of an ‘intelligent activity layer’. This is a relatively new technology in Industrie 4.0, and research efforts are geared towards exploring its applicability and developing and testing means for its implementation and quick adoption. This paper presents the design and implementation of a Digital Triplet for a three-floor elevator system. It demonstrates the integration of a machine learning (ML) object detection model with the system’s Digital Twin. This was done to introduce an additional security feature that enabled the system to make a decision based on objects detected and take preliminary security measures. The virtual model was designed in Siemens NX and programmed via the Totally Integrated Automation (TIA) Portal software. The corresponding physical model was fabricated and controlled using a Programmable Logic Controller (PLC) S7-1200. A control program was developed to mimic the general operations of a typical elevator system used in a commercial building setting. Communication between the physical and virtual models was enabled using the OPC Unified Architecture (OPC UA) protocol. Object recognition based on the “You Only Look Once” (YOLOv3) machine learning algorithm was incorporated. The Digital Triplet’s functionality was tested, ensuring the virtual system duplicated the actual operations of its physical counterpart through the use of sensor data. Performance testing was done to determine the impact of the ML module on the real-time functionality of the system. Experiment results showed the object recognition contributed an average of 1.083 s to an overall signal travel time of 1.338 s.
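As an illustration of the decision step described above, the following Python sketch maps object-detection output to a preliminary security action. The class names, confidence threshold, and actions are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical decision layer: maps object-detection output to an
# elevator security action. Labels, threshold, and actions are
# illustrative assumptions, not the paper's actual rules.

RESTRICTED = {"knife", "firearm"}   # assumed restricted object classes
CONFIDENCE_THRESHOLD = 0.5          # assumed detector confidence cutoff

def security_action(detections):
    """detections: list of (label, confidence) pairs from the detector."""
    flagged = [lbl for lbl, conf in detections
               if lbl in RESTRICTED and conf >= CONFIDENCE_THRESHOLD]
    if flagged:
        # Preliminary security measure: hold the car and raise an alert.
        return {"action": "hold_doors", "alert": True, "objects": flagged}
    return {"action": "proceed", "alert": False, "objects": []}

print(security_action([("person", 0.98), ("knife", 0.91)]))
```

In a full system this decision would feed back to the PLC over the OPC UA link, so the twin and the physical elevator act on the same verdict.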

Author(s):  
Negin Yousefpour ◽  
Steve Downie ◽  
Steve Walker ◽  
Nathan Perkins ◽  
Hristo Dikanski

Bridge scour is a challenge throughout the U.S.A. and other countries. Despite the scale of the issue, there is still a substantial lack of robust methods for scour prediction to support reliable, risk-based management and decision making. Throughout the past decade, the use of real-time scour monitoring systems has gained increasing interest among state departments of transportation across the U.S.A. This paper introduces three distinct methodologies for scour prediction using advanced artificial intelligence (AI)/machine learning (ML) techniques based on real-time scour monitoring data. The scour monitoring data included riverbed and river stage elevation time series at bridge piers gathered from various sources. Deep learning algorithms showed promise in predicting bed elevation and water level variations as early as a week in advance. Ensemble neural networks proved successful in predicting the maximum upcoming scour depth, using the observed sensor data at the onset of a scour episode, based on bridge pier, flow, and riverbed characteristics. In addition, two of the common empirical scour models were calibrated against the observed sensor data using Bayesian inference, showing significant improvement in prediction accuracy. Overall, this paper introduces a novel approach for scour risk management by integrating emerging AI/ML algorithms with real-time monitoring systems for early scour forecasting.
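The week-ahead forecasting setup described above can be illustrated with a minimal sliding-window construction, in which past riverbed elevation readings form the model input and the value a fixed number of steps ahead is the target. The window sizes and sample values are illustrative, not the study's data.

```python
def make_windows(series, lookback, horizon):
    """Build (input, target) pairs: the past `lookback` readings predict
    the value `horizon` steps ahead (e.g. 7 daily steps for a one-week
    forecast). Generic time-series featurization, not the paper's code."""
    X, y = [], []
    for i in range(len(series) - lookback - horizon + 1):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback + horizon - 1])
    return X, y

# Illustrative riverbed elevation record (metres), not real sensor data.
bed_elevation = [10.0, 9.8, 9.7, 9.5, 9.6, 9.4, 9.2, 9.1, 9.0, 8.8]
X, y = make_windows(bed_elevation, lookback=3, horizon=2)
print(len(X), X[0], y[0])
```

Pairs built this way would then be fed to the deep learning model; the same construction applies to the river stage series.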


2021 ◽  
Vol 11 (10) ◽  
pp. 4602
Author(s):  
Farzin Piltan ◽  
Jong-Myon Kim

In this study, the application of an intelligent digital twin integrated with machine learning for bearing anomaly detection and crack size identification is examined. The intelligent digital twin has two main sections: signal approximation and intelligent signal estimation. The mathematical vibration bearing signal approximation is combined with machine learning-based signal approximation to approximate the bearing vibration signal in normal conditions. After that, the combination of the Kalman filter, the high-order variable structure technique, and the adaptive neural-fuzzy technique is integrated with the proposed signal approximation technique to design the intelligent digital twin. Next, residual signals are generated using the proposed intelligent digital twin and the original raw signals. A machine learning approach is then integrated with the proposed intelligent digital twin for the classification of bearing anomalies and crack sizes. The Case Western Reserve University bearing dataset is used to test the impact of the proposed scheme. In the experimental results, the average accuracies for bearing fault pattern recognition and crack size identification were 99.5% and 99.6%, respectively.
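A minimal sketch of the residual-generation idea: the digital twin supplies a normal-condition estimate of the vibration signal, and a large residual between the measured signal and that estimate indicates an anomaly. The fixed threshold and the signals below are illustrative; the paper's actual classifier is an ML model, not a hand-set cutoff.

```python
import math

def residual_energy(observed, twin_estimate):
    """RMS of the residual between the measured vibration signal and the
    digital twin's normal-condition estimate."""
    r = [o - t for o, t in zip(observed, twin_estimate)]
    return math.sqrt(sum(x * x for x in r) / len(r))

def classify(observed, twin_estimate, threshold=0.5):
    # Assumed simple rule: large residual energy implies an anomaly.
    return ("anomaly" if residual_energy(observed, twin_estimate) > threshold
            else "normal")

twin = [0.0, 0.1, -0.1, 0.0]           # twin's normal-condition estimate
healthy = [0.05, 0.12, -0.08, 0.02]    # close to the estimate
faulty = [1.2, -0.9, 1.1, -1.0]        # large deviation

print(classify(healthy, twin), classify(faulty, twin))
```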


2021 ◽  
Author(s):  
Arturo Magana-Mora ◽  
Mohammad AlJubran ◽  
Jothibasu Ramasamy ◽  
Mohammed AlBassam ◽  
Chinthaka Gooneratne ◽  
...  

Abstract

Objective/Scope. Lost circulation events (LCEs) are among the top causes of drilling nonproductive time (NPT). The presence of natural fractures and vugular formations causes loss of drilling fluid circulation. Drilling depleted zones with incorrect mud weights can also lead to drilling-induced losses. LCEs can also develop into additional drilling hazards, such as stuck pipe incidents, kicks, and blowouts. An LCE is traditionally diagnosed only when there is a reduction in mud volume in the mud pits, in the case of moderate losses, or a reduction of the mud column in the annulus, in total losses. Using machine learning (ML) to predict the presence of a loss zone ahead and estimate its fracture parameters is very beneficial, as it can immediately alert the drilling crew so they can take the required actions to mitigate or cure LCEs.

Methods, Procedures, Process. Although different computational methods have been proposed for the prediction of LCEs, there is a need to further improve the models and reduce the number of false alarms. Robust and generalizable ML models require a sufficiently large amount of data that captures the different parameters and scenarios representing an LCE. For this, we derived a framework that automatically searches through historical data, locates LCEs, and extracts the surface drilling and rheology parameters surrounding such events.

Results, Observations, and Conclusions. We derived different ML models utilizing various algorithms and evaluated them using the data-split technique at the level of wells to find the most suitable model for the prediction of an LCE. From the model comparison, the random forest classifier achieved the best results and successfully predicted LCEs before they occurred. The developed LCE model is designed to be implemented in the real-time drilling portal as an aid to the drilling engineers and the rig crew to minimize or avoid NPT.

Novel/Additive Information. The main contribution of this study is the analysis of real-time surface drilling parameters and sensor data to predict an LCE from a statistically representative number of wells. The large-scale analysis of several wells that appropriately describe the different conditions before an LCE is critical for avoiding model undertraining or lack of model generalization. Finally, we formulated the prediction of LCEs as a time-series problem and considered parameter trends to accurately determine the early signs of LCEs.
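The well-level data split mentioned above can be sketched as follows: every record from a given well is assigned to either the training or the test set, so the evaluation measures generalization to unseen wells rather than memorization of well-specific conditions. The field names and test fraction are illustrative assumptions.

```python
import random

def split_by_well(samples, test_fraction=0.3, seed=0):
    """Split samples so that all records from a given well land entirely
    in either the training or the test set, preventing leakage of
    well-specific conditions across the split."""
    wells = sorted({s["well"] for s in samples})
    rng = random.Random(seed)
    rng.shuffle(wells)
    n_test = max(1, int(len(wells) * test_fraction))
    test_wells = set(wells[:n_test])
    train = [s for s in samples if s["well"] not in test_wells]
    test = [s for s in samples if s["well"] in test_wells]
    return train, test

# Illustrative records: 3 wells, 4 time steps each.
samples = [{"well": w, "t": t} for w in ("A", "B", "C") for t in range(4)]
train, test = split_by_well(samples)
print(len(train), len(test))
```

A random split at the record level would scatter each well's time series across both sets and inflate the apparent accuracy; splitting at the well level avoids that.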


2016 ◽  
Author(s):  
Bethany Signal ◽  
Brian S Gloss ◽  
Marcel E Dinger ◽  
Timothy R Mercer

Abstract

Background: The branchpoint element is required for the first lariat-forming reaction in splicing. However, due to the difficulty of experimentally mapping branchpoints at a genome-wide scale, current catalogues are incomplete.

Results: We have developed a machine learning algorithm trained with empirical human branchpoint annotations to identify branchpoint elements from primary genome sequence alone. Using this approach, we can accurately locate branchpoint elements in 85% of introns in current gene annotations. Consistent with branchpoints being basal genetic elements, we find our annotation is unbiased with respect to gene type and expression levels. A major fraction of introns was found to encode multiple branchpoints, raising the prospect that mutational redundancy is encoded in key genes. We also confirmed all deleterious branchpoint mutations annotated in clinical variant databases, and further identified thousands of clinical and common genetic variants with similar predicted effects.

Conclusions: We propose that this broad annotation of branchpoints constitutes a valuable resource for further investigations into the genetic encoding of splicing patterns, and for interpreting the impact of common and disease-causing human genetic variation on gene splicing.
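A common way to feed primary sequence to such a model is to one-hot encode a window around each candidate branchpoint position; the sketch below shows that featurization. The window and encoding are a generic illustration, not necessarily the authors' exact feature set.

```python
def one_hot(seq):
    """One-hot encode a DNA window as a flat feature vector, a standard
    featurization for sequence-based classifiers."""
    table = {"A": [1, 0, 0, 0], "C": [0, 1, 0, 0],
             "G": [0, 0, 1, 0], "T": [0, 0, 0, 1]}
    return [bit for base in seq for bit in table[base]]

# Illustrative window; TACTAAC is the canonical yeast branchpoint motif,
# used here only as a recognizable example sequence.
window = "TACTAAC"
features = one_hot(window)
print(len(features))
```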


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Hanlin Liu ◽  
Linqiang Yang ◽  
Linchao Li

A variety of climate factors influence the precision of long-term Global Navigation Satellite System (GNSS) monitoring data. To precisely analyze the effect of different climate factors on long-term GNSS monitoring records, this study combines the extended seven-parameter Helmert transformation with a machine learning algorithm, Extreme Gradient Boosting (XGBoost), to establish a hybrid model. We established a local-scale reference frame, the stable Puerto Rico and Virgin Islands reference frame of 2019 (PRVI19), using ten continuously operating long-term GNSS sites located in the rigid portion of the Puerto Rico and Virgin Islands (PRVI) microplate. The stability of PRVI19 is approximately 0.4 mm/year and 0.5 mm/year in the horizontal and vertical directions, respectively. The stable reference frame PRVI19 avoids the risk of bias due to long-term plate motions when studying localized ground deformation. Furthermore, we applied the XGBoost algorithm to the postprocessed long-term GNSS records and daily climate data to train the model. We quantitatively evaluated the importance of various daily climate factors on the GNSS time series. The results show that wind is the most influential factor, with a unit-less importance index of 0.013. Notably, we used the model with climate and GNSS records to predict the GNSS-derived displacements. The results show that the predicted displacements have a slightly lower root mean square error than the results fitted using the spline method (prediction: 0.22 versus fitted: 0.31). This indicates that the proposed model, which incorporates climate records, provides adequate predictions for long-term GNSS monitoring.
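For reference, the standard seven-parameter Helmert transformation (three translations, one scale, three small-angle rotations) can be written out directly. The study uses an extended variant, so the sketch below shows only the conventional form, with purely illustrative parameter values.

```python
def helmert(point, tx, ty, tz, s, rx, ry, rz):
    """Standard seven-parameter Helmert transformation in the small-angle
    approximation: X' = T + (1 + s) * R * X, with rotations rx, ry, rz
    in radians and scale s dimensionless."""
    x, y, z = point
    # Small-angle rotation applied to the coordinate vector.
    xr = x + rz * y - ry * z
    yr = -rz * x + y + rx * z
    zr = ry * x - rx * y + z
    k = 1.0 + s
    return (tx + k * xr, ty + k * yr, tz + k * zr)

# Illustrative ECEF-like coordinate and parameters, not the study's values.
p = helmert((6378137.0, 0.0, 0.0), tx=0.1, ty=0.2, tz=-0.1,
            s=1e-9, rx=0.0, ry=0.0, rz=0.0)
print(tuple(round(c, 3) for c in p))
```

Estimating these seven parameters between epochs is what ties the local GNSS solutions to the stable PRVI19 frame.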


Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3491 ◽  
Author(s):  
Issam Hammad ◽  
Kamal El-Sankary

Accuracy evaluation in machine learning is based on the split of data into a training set and a test set. This critical step is applied to develop machine learning models, including models based on sensor data. For sensor-based problems, comparing the accuracy of machine learning models using the train/test split provides only a baseline comparison under ideal conditions. Such comparisons do not consider practical production problems that can impact the inference accuracy, such as the sensors’ thermal noise, performance with lower inference quantization, and tolerance to sensor failure. Therefore, this paper proposes a set of practical tests that can be applied when comparing the accuracy of machine learning models for sensor-based problems. First, the impact of the sensors’ thermal noise on the models’ inference accuracy was simulated. Machine learning algorithms have different levels of error resilience to thermal noise, as will be presented. Second, the models’ accuracy using lower inference quantization was compared. Lowering the inference quantization allows a lower analog-to-digital converter (ADC) resolution, which is cost-effective in embedded designs. Moreover, in custom designs, the ADC’s effective number of bits (ENOB) is usually lower than the ideal number of bits due to various design factors. Therefore, it is practical to compare models’ accuracy using lower inference quantization. Third, the models’ accuracy tolerance to sensor failure was evaluated and compared. For this study, the University of California, Irvine (UCI) ‘Daily and Sports Activities’ dataset was used to present these practical tests and their impact on model selection.
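The first two tests described above, thermal-noise injection and reduced inference quantization, can be sketched as simple transformations of the sensor readings before they reach the model. The noise level, bit depth, and input range below are illustrative assumptions, not the paper's settings.

```python
import random

def add_thermal_noise(readings, sigma, seed=0):
    """Simulate sensor thermal noise as additive Gaussian noise."""
    rng = random.Random(seed)
    return [r + rng.gauss(0.0, sigma) for r in readings]

def quantize(readings, bits, lo, hi):
    """Re-quantize readings as a `bits`-resolution ADC over [lo, hi]
    would, to test model accuracy at lower inference quantization."""
    levels = 2 ** bits - 1
    step = (hi - lo) / levels
    return [lo + round((min(max(r, lo), hi) - lo) / step) * step
            for r in readings]

signal = [0.10, 0.52, 0.87]                     # illustrative readings
noisy = add_thermal_noise(signal, sigma=0.01)   # assumed noise level
coarse = quantize(signal, bits=4, lo=0.0, hi=1.0)
print(coarse)
```

The third test, tolerance to sensor failure, can be simulated similarly by zeroing or freezing the channels of one sensor before inference.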


Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3115 ◽  
Author(s):  
Yang Wei ◽  
Hao Wang ◽  
Kim Fung Tsang ◽  
Yucheng Liu ◽  
Chung Kit Wu ◽  
...  

Improperly grown trees may cause huge hazards to the environment and to humans, through, e.g., climate change, soil erosion, etc. A proximity environmental feature-based tree health assessment (PTA) scheme is proposed to prevent these hazards by providing guidance for early-warning methods of potential poor tree health. In PTA development, tree health is defined and evaluated based on proximity environmental features (PEFs). The PEFs take into consideration the seven surrounding ambient features that strongly impact tree health. The PEFs were measured by smart sensors deployed around the trees. A database composed of tree health and relative PEFs was established for further analysis. An adaptive data identifying (ADI) algorithm is applied to exclude the influence of interference factors in the database. Finally, the radial basis function (RBF) neural network (NN), a machine learning algorithm, was identified as the appropriate tool with which to correlate tree health and PEFs to establish the PTA algorithm. One of the salient features of PTA is that the algorithm can evaluate, and thus monitor, tree health remotely and automatically from smart sensor data by taking advantage of the well-established Internet of Things (IoT) network and machine learning algorithm.
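The hidden layer of an RBF network computes a Gaussian of the distance between the input feature vector and each learned centre; a minimal sketch follows. The two-feature PEF input, centres, and width parameter are hypothetical, purely for illustration.

```python
import math

def rbf_activations(x, centers, gamma=1.0):
    """Hidden-layer activations of an RBF network: a Gaussian of the
    squared distance between the input vector and each centre."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return [math.exp(-gamma * sq_dist(x, c)) for c in centers]

# Hypothetical 2-feature PEF input (e.g. normalized soil moisture and
# light level) and two hypothetical centres.
centers = [(0.2, 0.8), (0.7, 0.3)]
acts = rbf_activations((0.2, 0.8), centers)
print([round(a, 3) for a in acts])
```

A trained output layer would then combine these activations linearly to yield the tree-health score.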

