Network sensor data acquisition in large-scale rocket motor testing

2001 ◽  
Author(s):  
Scott Sorenson ◽  
Bruce Swanson


2021 ◽  
Vol 13 (5) ◽  
pp. 168781402110131
Author(s):  
Junfeng Wu ◽  
Li Yao ◽  
Bin Liu ◽  
Zheyuan Ding ◽  
Lei Zhang

As more and more sensor data are collected, automated detection and diagnosis systems are urgently needed to lessen the growing monitoring burden and reduce the risk of system faults. A great deal of research has addressed anomaly detection, event detection, and anomaly diagnosis individually; however, no current approach covers all of these aspects in one unified framework. In this work, a Multi-Task Learning based Encoder-Decoder (MTLED) that can simultaneously detect anomalies, diagnose anomalies, and detect events is proposed. In MTLED, a feature matrix is introduced so that features are extracted for each time point and point-wise anomaly detection can be realized in an end-to-end way. Anomaly diagnosis and event detection share this feature matrix with anomaly detection in the multi-task learning framework and also provide important information for system monitoring. To train such a comprehensive detection and diagnosis system, a large-scale multivariate time series dataset containing anomalies of multiple types is generated with simulation tools. Extensive experiments on the synthetic dataset verify the effectiveness of MTLED and its multi-task learning framework, and evaluation on a real-world dataset demonstrates that MTLED can be applied in other scenarios through transfer learning.
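The shared-feature-matrix idea behind MTLED can be sketched as follows; the dimensions, random weights, and linear task heads here are illustrative stand-ins for the trained encoder-decoder, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T time points, d_in sensor channels, d_feat features
T, d_in, d_feat = 100, 8, 16
x = rng.normal(size=(T, d_in))           # multivariate time series

# Shared encoder producing the per-time-point feature matrix
W_enc = rng.normal(size=(d_in, d_feat))
F = np.tanh(x @ W_enc)                   # feature matrix, shape (T, d_feat)

# Three task heads share F in the multi-task framework
W_det  = rng.normal(size=(d_feat, 1))    # anomaly detection: score per point
W_diag = rng.normal(size=(d_feat, 4))    # anomaly diagnosis: 4 anomaly types
W_evt  = rng.normal(size=(d_feat, 1))    # event detection: score per point

anomaly_scores   = F @ W_det             # (T, 1) point-wise anomaly scores
diagnosis_logits = F @ W_diag            # (T, 4) per-point anomaly-type logits
event_scores     = F @ W_evt             # (T, 1) point-wise event scores
```

Because all three heads read the same feature matrix, each time point gets a detection score, a diagnosis, and an event score in a single end-to-end pass.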


2018 ◽  
Vol 75 (5) ◽  
pp. 797-812 ◽  
Author(s):  
Beau Doherty ◽  
Samuel D.N. Johnson ◽  
Sean P. Cox

Bottom longline hook and trap fishing gear can potentially damage sensitive benthic areas (SBAs) in the ocean; however, the large-scale risks to these habitats are poorly understood because of the difficulties in mapping SBAs and in measuring the bottom-contact area of longline gear. In this paper, we describe a collaborative academic–industry–government approach to obtaining direct presence–absence data for SBAs and to measuring gear interactions with seafloor habitats via a novel deepwater trap camera and motion-sensing systems on commercial longline traps for sablefish (Anoplopoma fimbria) within SGaan Kinghlas – Bowie Seamount Marine Protected Area. We obtained direct presence–absence observations of cold-water corals (Alcyonacea, Antipatharia, Pennatulacea, Stylasteridae) and sponges (Hexactinellida, Demospongiae) at 92 locations over three commercial fishing trips. Video, accelerometer, and depth sensor data were used to estimate a mean bottom footprint of 53 m² for a standard sablefish trap, which translates to 3200 m² (95% CI = 2400–3900 m²) for a 60-trap commercial sablefish longline set. Our successful collaboration demonstrates how research partnerships with commercial fisheries have potential for massive improvements in the quantity and quality of data needed for conducting SBA risk assessments over large spatial and temporal scales.
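The set-level footprint follows directly from the per-trap estimate; a quick check of the reported numbers:

```python
# Values reported in the study: mean per-trap bottom footprint estimated from
# the video, accelerometer, and depth sensor data, and traps per longline set
trap_footprint_m2 = 53.0
traps_per_set = 60

# The set-level footprint scales linearly with trap count
set_footprint_m2 = trap_footprint_m2 * traps_per_set     # 3180, reported as ~3200
# The 95% CI scales the same way, implying roughly 40-65 m² per trap
ci_per_trap_m2 = (2400 / traps_per_set, 3900 / traps_per_set)
```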


2021 ◽  
Author(s):  
Arturo Magana-Mora ◽  
Mohammad AlJubran ◽  
Jothibasu Ramasamy ◽  
Mohammed AlBassam ◽  
Chinthaka Gooneratne ◽  
...  

Abstract Objective/Scope. Lost circulation events (LCEs) are among the top causes of drilling nonproductive time (NPT). The presence of natural fractures and vugular formations causes loss of drilling fluid circulation, and drilling depleted zones with incorrect mud weights can also lead to drilling-induced losses. LCEs can develop into additional drilling hazards, such as stuck pipe incidents, kicks, and blowouts. Traditionally, an LCE is diagnosed only when there is a reduction in mud volume in the mud pits (moderate losses) or a reduction of the mud column in the annulus (total losses). Using machine learning (ML) to predict the presence of a loss zone and estimate fracture parameters in advance is very beneficial, as it can immediately alert the drilling crew so they can take the required actions to mitigate or cure LCEs. Methods, Procedures, Process. Although different computational methods have been proposed for the prediction of LCEs, there is a need to further improve the models and reduce the number of false alarms. Robust and generalizable ML models require a sufficiently large amount of data that captures the different parameters and scenarios representing an LCE. For this, we derived a framework that automatically searches through historical data, locates LCEs, and extracts the surface drilling and rheology parameters surrounding such events. Results, Observations, and Conclusions. We built ML models using various algorithms and evaluated them with a data split at the level of wells to find the most suitable model for LCE prediction. From this comparison, the random forest classifier achieved the best results and successfully predicted LCEs before they occurred. The developed LCE model is designed to be implemented in the real-time drilling portal as an aid to drilling engineers and the rig crew to minimize or avoid NPT. Novel/Additive Information. 
The main contribution of this study is the analysis of real-time surface drilling parameters and sensor data to predict an LCE from a statistically representative number of wells. Large-scale analysis of wells that appropriately describe the different conditions preceding an LCE is critical for avoiding undertraining and poor generalization. Finally, we formulated LCE prediction as a time-series problem and considered parameter trends to accurately capture the early signs of LCEs.
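The time-series framing can be illustrated with a toy sliding-window feature extractor; the mud-pit volume trace, window width, and slope threshold below are hypothetical, and the actual model feeds many such surface parameters into a random forest:

```python
import numpy as np

def window_features(series, width):
    """Trend features over sliding windows: mean level and linear slope."""
    feats = []
    for start in range(len(series) - width + 1):
        w = series[start:start + width]
        slope = np.polyfit(np.arange(width), w, 1)[0]
        feats.append((w.mean(), slope))
    return np.array(feats)

# Hypothetical mud-pit volume trace: steady, then a loss event drains the pits
volume = np.concatenate([np.full(50, 100.0), 100.0 - 0.8 * np.arange(30)])
X = window_features(volume, width=10)

# A sustained negative volume trend is an early sign of a lost-circulation event
flagged = X[:, 1] < -0.5
```

Trend features like the slope respond before the absolute pit volume drops far enough to trigger a traditional alarm, which is what makes the time-series formulation attractive.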


IEEE Access ◽  
2015 ◽  
Vol 3 ◽  
pp. 2341-2351 ◽  
Author(s):  
Zhuofeng Zhao ◽  
Weilong Ding ◽  
Jianwu Wang ◽  
Yanbo Han

Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3273
Author(s):  
Lesong Zhou ◽  
Zheng Sheng ◽  
Qixiang Liao

In recent years, Thorpe analysis has been used to retrieve the characteristics of turbulence in the free atmosphere from balloon-borne sensor data. However, previous studies have mainly focused on mid- to high-latitude regions, and the method is still rarely applied at heights above 30 km, especially above 35 km. Therefore, seven sets of upper-air (>35 km) sounding data from the Changsha Sounding Station (28°12′ N, 113°05′ E), China are analyzed with Thorpe analysis in this article. In the troposphere, Thorpe analysis retrieves the turbulence distribution and the corresponding turbulence parameters well. Also, because the troposphere is thicker at low latitudes, the values of the Thorpe scale L_T and the turbulent energy dissipation rate ε remain larger over a greater height range. In the stratosphere below 35 km, the obtained ε is higher, and Thorpe analysis can only be used to analyze the characteristics of large-scale turbulence. In the stratosphere at 35–40 km, because of interference from sensor noise, Thorpe analysis can only retrieve the rough position of large-scale turbulence and can hardly support the calculation of the turbulence parameters.
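The core of Thorpe analysis is sorting a potential-temperature profile into a statically stable order and measuring the displacements this requires; a minimal numpy sketch on a synthetic profile (the overturn layer and profile values are invented for illustration):

```python
import numpy as np

def thorpe_scale(z, theta):
    """Thorpe scale L_T: rms of the displacements needed to sort the
    potential temperature profile theta(z) into a statically stable order."""
    order = np.argsort(theta, kind="stable")  # stable profile: theta increasing
    displacement = z[order] - z               # Thorpe displacements d(z)
    return float(np.sqrt(np.mean(displacement ** 2)))

# Synthetic profile: stable background with one overturn between 200 and 290 m
z = np.arange(0.0, 1000.0, 10.0)              # height, m
theta = 300.0 + 0.004 * z                      # potential temperature, K
theta[20:30] = theta[20:30][::-1]              # invert a layer to mimic turbulence

L_T = thorpe_scale(z, theta)
# ε is then commonly estimated from L_T and the buoyancy frequency N,
# ε ≈ c · L_T² · N³, which is where sensor noise at 35-40 km becomes limiting
```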


Author(s):  
Ahmad Iwan Fadli ◽  
Selo Sulistyo ◽  
Sigit Wibowo

Traffic accidents are a very difficult problem to handle at large scale in a country. Indonesia is one of the most populous developing countries and relies on vehicles as its main daily transportation; it is also the country with the largest number of car users in Southeast Asia, so driving safety needs to be considered. Using machine learning classification to determine whether a driver is driving safely can help reduce the risk of driving accidents. We created a detection system that classifies whether a driver is driving safely or unsafely using trip sensor data, which includes gyroscope, acceleration, and GPS measurements. The classification methods used in this study are the Random Forest (RF) algorithm, Support Vector Machine (SVM), and Multilayer Perceptron (MLP), with improved data preprocessing using feature extraction and oversampling. This study shows that RF has the best performance, with 98% accuracy, 98% precision, and 97% sensitivity using the proposed preprocessing stages, compared to SVM and MLP.
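The pipeline of feature extraction followed by classification can be sketched with synthetic trip traces; a nearest-centroid rule stands in here for the RF/SVM/MLP models, and all traces, features, and noise levels are invented for illustration:

```python
import numpy as np

def trip_features(gyro, accel):
    """Simple per-trip features from raw gyroscope and acceleration traces
    (a stand-in for the paper's feature-extraction stage)."""
    return np.array([np.abs(gyro).mean(), np.abs(gyro).max(),
                     np.abs(accel).mean(), np.abs(accel).max()])

rng = np.random.default_rng(1)
# Synthetic trips: calm traces for safe driving, erratic ones for unsafe
safe_trips   = [trip_features(rng.normal(0, 0.2, 500), rng.normal(0, 0.5, 500))
                for _ in range(20)]
unsafe_trips = [trip_features(rng.normal(0, 1.0, 500), rng.normal(0, 2.5, 500))
                for _ in range(20)]

# A nearest-centroid rule stands in for the RF/SVM/MLP classifiers
centroids = {"safe": np.mean(safe_trips, axis=0),
             "unsafe": np.mean(unsafe_trips, axis=0)}

def classify(x):
    return min(centroids, key=lambda k: np.linalg.norm(x - centroids[k]))

new_trip = trip_features(rng.normal(0, 0.9, 500), rng.normal(0, 2.2, 500))
label = classify(new_trip)
```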


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 2792 ◽  
Author(s):  
Hyunseok Kim ◽  
Dongjun Suh

A hybrid particle swarm optimization (PSO), able to overcome the large-scale nonlinearity and heavy correlation in data fusion models of multiple sensing sources, is proposed in this paper. In recent smart convergence technology, multiple similar and/or dissimilar sensors are widely used to provide precise sensing information from different perspectives, and these are integrated with data fusion algorithms to obtain synergistic effects. However, constructing the data fusion model is not trivial because of the restricted conditions of a multi-sensor system, such as limited options for deploying sensors, nonlinear characteristics, and correlation errors among sensors. This paper presents a hybrid PSO that facilitates the construction of a robust neural-network-based data fusion model while ensuring a balance between exploration and exploitation. The performance of the proposed model was evaluated on benchmarks composed of representative datasets. The well-optimized data fusion model is expected to enhance synergistic accuracy.
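A plain (non-hybrid) global-best PSO, shown here minimizing a sphere benchmark in place of the neural-network data fusion loss, illustrates the velocity and position updates the hybrid variant builds on; all parameter values are generic textbook choices, not the paper's:

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=300, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best particle swarm optimizer (minimization)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                             # particle velocities
    pbest = x.copy()                                 # personal bests
    pbest_f = np.apply_along_axis(f, 1, x)
    gbest = pbest[np.argmin(pbest_f)].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia pulls forward; cognitive/social terms pull toward the bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

# Sphere benchmark stands in for the data fusion model's loss surface
best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), dim=5)
```

The inertia weight `w` governs exploration while the attraction coefficients `c1`, `c2` govern exploitation; a hybrid PSO adjusts or supplements this balance, for example with local search or adaptive coefficients.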


Author(s):  
Joaquin Vanschoren ◽  
Ugo Vespier ◽  
Shengfa Miao ◽  
Marvin Meeng ◽  
Ricardo Cachucho ◽  
...  

Sensors are increasingly being used to monitor the world around us. They measure movements of structures such as bridges, windmills, and plane wings, humans' vital signs, atmospheric conditions, and fluctuations in power and water networks. In many cases, this results in large networks with different types of sensors, generating impressive amounts of data. As the volume and complexity of data increase, their effective use becomes more challenging, and novel solutions are needed on both a technical and a scientific level. Drawing on several real-world applications, this chapter discusses the challenges involved in large-scale sensor data analysis and describes practical solutions to address them. Given the sheer size of the data and the large amount of computation involved, these are clearly "Big Data" applications.


Author(s):  
Pattabiraman V. ◽  
Parvathi R.

Natural data arising directly from various sources, such as text, image, video, audio, and sensor data, inherently has very many dimensions or features. While these features add richness and perspective to the data, their associated sparsity increases the computational complexity of learning, makes the data hard to visualize and interpret, and demands large-scale computational power to extract insights. This is famously called the "curse of dimensionality." This chapter discusses the conventional methods used to cure the curse of dimensionality and analyzes their performance on complex datasets. It also discusses the advantages of nonlinear methods over linear ones, and of neural networks, which can be a better approach than other nonlinear methods. It also discusses future research areas, such as the application of deep learning techniques as a cure for this curse.
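As a concrete example of a conventional linear cure, PCA via SVD recovers a low-dimensional subspace from nominally high-dimensional data; the dataset below is synthetic, with points lying near a 3-dimensional subspace of a 100-dimensional space (nonlinear methods such as kernel PCA or autoencoders would be needed if the data lay on a curved manifold instead):

```python
import numpy as np

def pca(X, k):
    """Linear dimensionality reduction via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, S               # k-dimensional projection, spectrum

rng = np.random.default_rng(0)
# 500 samples that really live near a 3-d subspace of a 100-d feature space
latent = rng.normal(size=(500, 3))
X = latent @ rng.normal(size=(3, 100)) + 0.01 * rng.normal(size=(500, 100))

Z, S = pca(X, k=3)
explained = float(np.sum(S[:3] ** 2) / np.sum(S ** 2))  # variance in 3 PCs
```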

