A Hybrid Processing System for Large-Scale Traffic Sensor Data

IEEE Access ◽  
2015 ◽  
Vol 3 ◽  
pp. 2341-2351 ◽  
Author(s):  
Zhuofeng Zhao ◽  
Weilong Ding ◽  
Jianwu Wang ◽  
Yanbo Han
2021 ◽  
Vol 48 (3) ◽  
pp. 128-129
Author(s):  
Sounak Kar ◽  
Robin Rehrmann ◽  
Arpan Mukhopadhyay ◽  
Bastian Alt ◽  
Florin Ciucu ◽  
...  

We analyze a data-processing system with n clients producing jobs which are processed in batches by m parallel servers; the system throughput critically depends on the batch size and a corresponding sub-additive speedup function that arises due to overhead amortization. In practice, throughput optimization relies on numerical searches for the optimal batch size, which is computationally cumbersome. In this paper, we model this system in terms of a closed queueing network assuming certain forms of service speedup; a standard Markovian analysis yields the optimal throughput in ω(n⁴) time. Our main contribution is a mean-field model that has a unique, globally attractive stationary point, derivable in closed form. This point characterizes the asymptotic throughput as a function of the batch size, which can be calculated in O(1) time. Numerical settings from a large commercial system reveal that this asymptotic optimum is accurate in practical finite regimes.
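The trade-off the abstract describes can be illustrated with a toy model (all constants and the functional form below are hypothetical, not taken from the paper): a batch of size b costs a fixed overhead plus a per-job cost, so overhead is amortized across the batch, while a quadratic penalty stands in for the congestion larger batches cause in the closed network. The brute-force search below is the kind of numerical optimization the paper's closed-form mean-field expression replaces.

```python
# Toy sub-additive speedup model (hypothetical constants):
#   serving a batch of size b costs c0 + c1*b + w*b**2,
#   so per-server throughput is b / (c0 + c1*b + w*b**2).
def throughput(b, c0=1.0, c1=0.1, w=0.01):
    return b / (c0 + c1 * b + w * b * b)

# Exhaustive numerical search over batch sizes -- the costly approach
# a closed-form optimum makes unnecessary.
best_b = max(range(1, 101), key=throughput)
```

With these constants the interior optimum is b = √(c0/w) = 10: below it the fixed overhead dominates, above it the congestion penalty does.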


2021 ◽  
Vol 13 (5) ◽  
pp. 168781402110131
Author(s):  
Junfeng Wu ◽  
Li Yao ◽  
Bin Liu ◽  
Zheyuan Ding ◽  
Lei Zhang

As more and more sensor data are collected, automated detection and diagnosis systems are urgently needed to lessen the increasing monitoring burden and reduce the risk of system faults. A great deal of research has been done on anomaly detection, event detection, and anomaly diagnosis individually; however, none of the current approaches covers all of these aspects in one unified framework. In this work, a Multi-Task Learning based Encoder-Decoder (MTLED), which can simultaneously detect anomalies, diagnose anomalies, and detect events, is proposed. In MTLED, a feature matrix is introduced so that features are extracted for each time point and point-wise anomaly detection can be realized in an end-to-end way. Anomaly diagnosis and event detection share the same feature matrix with anomaly detection in the multi-task learning framework and also provide important information for system monitoring. To train such a comprehensive detection and diagnosis system, a large-scale multivariate time series dataset containing anomalies of multiple types is generated with simulation tools. Extensive experiments on the synthetic dataset verify the effectiveness of MTLED and its multi-task learning framework, and evaluation on a real-world dataset demonstrates that MTLED can be applied to other scenarios through transfer learning.
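The shared-feature-matrix idea can be sketched structurally (all names, shapes, and the tanh encoder below are illustrative assumptions, not MTLED's actual architecture): one encoder maps a multivariate series to a per-time-point feature matrix, which three task heads consume.

```python
# Structural sketch of a shared-encoder multi-task setup (hypothetical
# shapes and layers, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W):           # x: (T, d) series -> (T, k) feature matrix
    return np.tanh(x @ W)

def head(features, V):       # point-wise outputs for one task
    return features @ V

T, d, k = 8, 3, 4
x = rng.normal(size=(T, d))                       # synthetic sensor window
feat = encoder(x, rng.normal(size=(d, k)))        # shared feature matrix
anomaly   = head(feat, rng.normal(size=(k, 1)))   # anomaly-detection head
diagnosis = head(feat, rng.normal(size=(k, 5)))   # anomaly-diagnosis head (5 types)
event     = head(feat, rng.normal(size=(k, 1)))   # event-detection head
```

Because every head reads the same feature matrix, gradients from all three tasks would shape the shared encoder during training, which is the point of the multi-task design.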


2018 ◽  
Vol 75 (5) ◽  
pp. 797-812 ◽  
Author(s):  
Beau Doherty ◽  
Samuel D.N. Johnson ◽  
Sean P. Cox

Bottom longline hook and trap fishing gear can potentially damage sensitive benthic areas (SBAs) in the ocean; however, the large-scale risks to these habitats are poorly understood because of the difficulties in mapping SBAs and in measuring the bottom-contact area of longline gear. In this paper, we describe a collaborative academic–industry–government approach to obtaining direct presence–absence data for SBAs and to measuring gear interactions with seafloor habitats via a novel deepwater trap camera and motion-sensing systems on commercial longline traps for sablefish (Anoplopoma fimbria) within SGaan Kinghlas – Bowie Seamount Marine Protected Area. We obtained direct presence–absence observations of cold-water corals (Alcyonacea, Antipatharia, Pennatulacea, Stylasteridae) and sponges (Hexactinellida, Demospongiae) at 92 locations over three commercial fishing trips. Video, accelerometer, and depth sensor data were used to estimate a mean bottom footprint of 53 m2 for a standard sablefish trap, which translates to 3200 m2 (95% CI = 2400–3900 m2) for a 60-trap commercial sablefish longline set. Our successful collaboration demonstrates how research partnerships with commercial fisheries have potential for massive improvements in the quantity and quality of data needed for conducting SBA risk assessments over large spatial and temporal scales.


2014 ◽  
Vol 687-691 ◽  
pp. 3733-3737
Author(s):  
Dan Wu ◽  
Ming Quan Zhou ◽  
Rong Fang Bie

Massive image processing places high demands on the processor and memory: it requires a high-performance processor and large-capacity memory, which single (or single-core) processors and traditional memory cannot satisfy. This paper introduces cloud computing into a massive image processing system. Cloud computing expands the system's virtual space, conserves computing resources, and improves the efficiency of image processing. The system processor is a multi-core DSP parallel processor, and a visual parameter-setting window and results output were developed with VC software. Simulation calculations yield the image processing speed curve and the system's image adaptation curve. The work provides a technical reference for the design of large-scale image processing systems.


2021 ◽  
Author(s):  
Arturo Magana-Mora ◽  
Mohammad AlJubran ◽  
Jothibasu Ramasamy ◽  
Mohammed AlBassam ◽  
Chinthaka Gooneratne ◽  
...  

Abstract Objective/Scope. Lost circulation events (LCEs) are among the top causes of drilling nonproductive time (NPT). The presence of natural fractures and vugular formations causes loss of drilling fluid circulation. Drilling depleted zones with incorrect mud weights can also lead to drilling-induced losses. LCEs can also develop into additional drilling hazards, such as stuck pipe incidents, kicks, and blowouts. An LCE is traditionally diagnosed only when there is a reduction in mud volume in the mud pits in the case of moderate losses, or a reduction of the mud column in the annulus in the case of total losses. Using machine learning (ML) to predict the presence of a loss zone and to estimate fracture parameters ahead is very beneficial, as it can immediately alert the drilling crew so that they can take the required actions to mitigate or cure LCEs. Methods, Procedures, Process. Although different computational methods have been proposed for the prediction of LCEs, there is a need to further improve the models and reduce the number of false alarms. Robust and generalizable ML models require a sufficiently large amount of data that captures the different parameters and scenarios representing an LCE. For this, we derived a framework that automatically searches through historical data, locates LCEs, and extracts the surface drilling and rheology parameters surrounding such events. Results, Observations, and Conclusions. We derived different ML models utilizing various algorithms and evaluated them using the data-split technique at the level of wells to find the most suitable model for the prediction of an LCE. From the model comparison, the random forest classifier achieved the best results and successfully predicted LCEs before they occurred. The developed LCE model is designed to be implemented in the real-time drilling portal as an aid to the drilling engineers and the rig crew to minimize or avoid NPT. Novel/Additive Information. 
The main contribution of this study is the analysis of real-time surface drilling parameters and sensor data to predict an LCE from a statistically representative number of wells. The large-scale analysis of several wells that appropriately describe the different conditions before an LCE is critical for avoiding model undertraining or lack of model generalization. Finally, we formulated the prediction of LCEs as a time-series problem and considered parameter trends to accurately determine the early signs of LCEs.
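The well-level data split mentioned above can be sketched as follows: every record from a given well lands entirely in the training set or entirely in the test set, so the model is evaluated on wells it has never seen. The well ids, features, and labels below are synthetic stand-ins.

```python
# Hedged sketch of a well-level train/test split (synthetic data, not the
# study's dataset): group records by well before splitting so that no well
# contributes rows to both sides.
import random

records = [{"well": w, "features": [random.random()], "lce": w % 2}
           for w in range(10) for _ in range(5)]   # 10 wells x 5 records

wells = sorted({r["well"] for r in records})
random.seed(42)
random.shuffle(wells)
test_wells = set(wells[:3])                        # hold out 3 of 10 wells

train = [r for r in records if r["well"] not in test_wells]
test  = [r for r in records if r["well"] in test_wells]
```

Splitting at the record level instead would leak near-duplicate time-series rows from the same well into both sets and overstate model performance, which is why the well-level split matters for generalization claims.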


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3273
Author(s):  
Lesong Zhou ◽  
Zheng Sheng ◽  
Qixiang Liao

In recent years, Thorpe analysis has been used to retrieve the characteristics of turbulence in free atmosphere from balloon-borne sensor data. However, previous studies have mainly focused on the mid-high latitude region, and this method is still rarely applied at heights above 30 km, especially above 35 km. Therefore, seven sets of upper air (>35 km) sounding data from the Changsha Sounding Station (28°12′ N, 113°05′ E), China are analyzed with Thorpe analysis in this article. It is noted that, in the troposphere, Thorpe analysis can better retrieve the turbulence distribution and the corresponding turbulence parameters. Also, because of the thicker troposphere at low latitudes, the values of the Thorpe scale L_T and turbulent energy dissipation rate ε remain greater in a larger height range. In the stratosphere below the height of 35 km, the obtained ε is higher, and Thorpe analysis can only be used to analyze the characteristics of large-scale turbulence. In the stratosphere at a height of 35–40 km, because of the interference of sensor noise, Thorpe analysis can only help to retrieve the rough distribution position of large-scale turbulence, while it can hardly help with the calculation of the turbulence parameters.
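The core of the Thorpe method can be sketched on a toy profile (the temperature values below are synthetic): sort the measured potential-temperature profile into a statically stable one; the distance each sample moves is its Thorpe displacement, and the Thorpe scale L_T is their root mean square.

```python
# Sketch of Thorpe displacements and the Thorpe scale on a synthetic
# profile. Displacements here are in sample units; multiply by the vertical
# sampling interval to convert to metres. ε is then commonly estimated from
# L_T and the buoyancy frequency N (ε is proportional to L_T**2 * N**3).
import numpy as np

theta = np.array([300.0, 300.4, 300.2, 300.1, 300.6, 300.5])  # K, vs. height
order = np.argsort(theta, kind="stable")       # indices that sort the profile
displacement = order - np.arange(theta.size)   # Thorpe displacements (samples)
L_T = np.sqrt(np.mean(displacement.astype(float) ** 2))
```

Instrument noise creates spurious small inversions in the sorted profile, which is consistent with the abstract's finding that the method degrades at 35–40 km where the signal-to-noise ratio is low.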


1993 ◽  
Vol 2 (4) ◽  
pp. 133-144 ◽  
Author(s):  
Jon B. Weissman ◽  
Andrew S. Grimshaw ◽  
R.D. Ferraro

The conventional wisdom in the scientific computing community is that the best way to solve large-scale numerically intensive scientific problems on today's parallel MIMD computers is to use Fortran or C programmed in a data-parallel style using low-level message-passing primitives. This approach inevitably leads to nonportable codes and extensive development time, and restricts parallel programming to the domain of the expert programmer. We believe that these problems are not inherent to parallel computing but are the result of the programming tools used. We will show that comparable performance can be achieved with little effort if better tools that present higher level abstractions are used. The vehicle for our demonstration is a 2D electromagnetic finite element scattering code we have implemented in Mentat, an object-oriented parallel processing system. We briefly describe the application, Mentat, and the implementation, and present performance results for both the Mentat and a hand-coded parallel Fortran version.

