Determining Rig State from Computer Vision Analytics

2021 ◽  
Author(s):  
Crispin Chatar ◽  
Suhas Suresha ◽  
Laetitia Shao ◽  
Soumya Gupta ◽  
Indranil Roychoudhury

Abstract For years, many companies involved with drilling have searched for the ideal method to calculate the state of a drilling rig. While companies cannot agree on a standard definition of "rig state," they can agree that, as the industry moves forward with drilling optimization and further use of remote operations and automation, rig state calculation is mandatory in one form or another. Many methods for calculating rig state exist within the service company, but one new technology area, vision analytics, holds promise to deliver a more efficient and cost-effective option with higher accuracy. Currently, detection algorithms rely heavily on data collected by sensors installed on the rig. However, relying exclusively on sensor data is problematic because sensors are prone to failure and are expensive to install and maintain. A machine learning model that relies exclusively on videos collected on the rig floor to infer rig states makes it possible to move away from the existing methods as the industry moves toward a future of high-tech rigs. Videos, in contrast to sensor data, are relatively easy to collect from small, inexpensive cameras installed at strategic locations. Consequently, this paper presents a machine learning pipeline that performs rig state determination from videos captured on the rig floor of an operating rig. The pipeline can be described in two parts. First, the annotation pipeline matches each frame of the video dataset to a rig state; a convolutional neural network (CNN) is used to match the time of the video with the corresponding sensor data. Second, additional CNNs are trained, capturing both spatial and temporal information, to estimate rig state from videos. The models are trained on a dataset of 3 million frames on a cloud platform using graphics processing units (GPUs).
Some of the models used include a pretrained visual geometry group (VGG) network, a convolutional three-dimensional (C3D) model that uses three-dimensional (3D) convolutions, and a two-stream model that uses optical flow to capture temporal information. The initial results demonstrate this pipeline to be effective in detecting rig states using computer vision analytics.
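As a concrete illustration of the annotation step described above, the following sketch pairs each video frame with the rig state of the sensor reading nearest in time. The function name, data layout, and nearest-timestamp strategy are assumptions for illustration, not the paper's exact method:

```python
from bisect import bisect_left

def annotate_frames(frame_times, sensor_readings):
    """Label each video frame with the rig state of the sensor
    reading closest in time. `sensor_readings` is a list of
    (timestamp, rig_state) tuples sorted by timestamp."""
    times = [t for t, _ in sensor_readings]
    labels = []
    for ft in frame_times:
        i = bisect_left(times, ft)
        # pick the closer of the two neighbouring readings
        if i == 0:
            j = 0
        elif i == len(times):
            j = len(times) - 1
        else:
            j = i if times[i] - ft < ft - times[i - 1] else i - 1
        labels.append(sensor_readings[j][1])
    return labels
```

Labels produced this way would then serve as training targets for the frame-level CNNs.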

2020 ◽  
Vol 10 (14) ◽  
pp. 4959
Author(s):  
Reda Belaiche ◽  
Yu Liu ◽  
Cyrille Migniot ◽  
Dominique Ginhac ◽  
Fan Yang

Micro-Expression (ME) recognition is a hot topic in computer vision, as it presents a gateway to capturing and understanding daily human emotions. It is nonetheless a challenging problem because MEs are typically transient (lasting less than 200 ms) and subtle. Recent advances in machine learning enable new and effective methods to be adopted for solving diverse computer vision tasks. In particular, deep learning techniques applied to large datasets outperform classical machine learning approaches that rely on hand-crafted features. Even though available datasets for spontaneous MEs are scarce and much smaller, off-the-shelf Convolutional Neural Networks (CNNs) still demonstrate satisfactory classification results. However, these networks are demanding in terms of memory consumption and computational resources. This poses great challenges when deploying CNN-based solutions in applications such as driver monitoring and comprehension recognition in virtual classrooms, which demand fast and accurate recognition. As these networks were initially designed for tasks in other domains, they are over-parameterized and need to be optimized for ME recognition. In this paper, we propose a new network based on the well-known ResNet18, which we optimized for ME classification in two ways. First, we reduced the depth of the network by removing residual layers. Second, we introduced a more compact representation of the optical flow used as input to the network. We present extensive experiments and demonstrate that the proposed network obtains accuracy comparable to state-of-the-art methods while significantly reducing the necessary memory space. Our best classification accuracy was 60.17% on a challenging composite dataset containing five objective classes. Our method takes only 24.6 ms to classify an ME video clip (less than the duration of the shortest ME, which lasts 40 ms).
Our CNN design is suitable for real-time embedded applications with limited memory and computing resources.
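One way to obtain a compact optical-flow input of the kind described above is to collapse the dense per-pixel (dx, dy) field into a magnitude-weighted orientation histogram. The sketch below illustrates the general idea only; it is not the paper's exact representation:

```python
import math

def flow_histogram(flow, n_bins=8):
    """Collapse a dense optical-flow field into a compact,
    magnitude-weighted orientation histogram: n_bins values
    instead of two values per pixel.
    `flow` is an iterable of (dx, dy) vectors, one per pixel."""
    hist = [0.0] * n_bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)
        if mag == 0.0:
            continue  # static pixels carry no motion information
        angle = math.atan2(dy, dx) % (2 * math.pi)
        hist[int(angle / (2 * math.pi) * n_bins) % n_bins] += mag
    total = sum(hist) or 1.0
    return [h / total for h in hist]  # normalized to sum to 1
```

Feeding such a fixed-length summary (or a coarse grid of them) to the network shrinks the input, and with it the memory footprint, relative to a full two-channel flow image.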


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6354
Author(s):  
Aimi Aznan ◽  
Claudia Gonzalez Viejo ◽  
Alexis Pang ◽  
Sigfredo Fuentes

Rice quality assessment is essential for meeting high-quality standards and consumer demands. However, challenges remain in developing cost-effective and rapid techniques to assess commercial rice grain quality traits. This paper presents the application of computer vision (CV) and machine learning (ML) to classify commercial rice samples based on dimensionless morphometric parameters and color parameters extracted using CV algorithms from digital images obtained with a smartphone camera. An artificial neural network (ANN) model was developed using nine morpho-colorimetric parameters to classify rice samples into 15 commercial rice types. Furthermore, the ANN models were deployed and evaluated on a different imaging system to simulate their practical application under different conditions. Results showed that the best classification accuracy was obtained using the Bayesian Regularization (BR) algorithm of the ANN with ten hidden neurons: 91.6% (MSE < 0.01) and 88.5% (MSE = 0.01) for the training and testing stages, respectively, with an overall accuracy of 90.7% (Model 2). Deployment also showed high accuracy (93.9%) in classifying the rice samples. The adoption by the industry of rapid, reliable, and accurate methods such as those presented here may allow different morpho-colorimetric rice traits to be incorporated into consumer perception studies.
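Dimensionless morphometric parameters of the kind used here can be computed from basic pixel measurements of a segmented grain; being ratios, they are independent of image scale. A minimal sketch (the three descriptors chosen are standard shape measures for illustration, not necessarily the paper's nine):

```python
import math

def morphometrics(area, perimeter, length, width):
    """Dimensionless shape descriptors of a rice grain, of the kind
    a CV pipeline can feed to an ANN classifier. Inputs are pixel
    measurements (region area, contour perimeter, bounding-box
    length and width) from a segmented grain."""
    return {
        "aspect_ratio": length / width,
        # 1.0 for a perfect circle, smaller for elongated shapes
        "circularity": 4 * math.pi * area / perimeter ** 2,
        # grain area relative to its bounding box
        "rectangularity": area / (length * width),
    }
```

For example, a unit circle (area π, perimeter 2π, bounding box 2 × 2) yields aspect ratio 1.0 and circularity 1.0, while a slender grain scores well above 1.0 on aspect ratio and well below 1.0 on circularity.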


SPE Journal ◽  
2012 ◽  
Vol 17 (03) ◽  
pp. 752-767 ◽  
Author(s):  
Hai Hoang ◽  
Jagannathan Mahadevan ◽  
Henry Lopez

Summary Tight gas plays often have multiple lenses of producing formations. Multizone fracturing, or limited-entry fracturing, is a cost-effective method to complete and produce tight gas wells in these layered reservoirs. The rate and volume of fracturing fluid injected into the different layers play an important role in determining the fracture characteristics. However, because of the spatial restrictions of downhole conditions, it is very challenging to obtain a specific injection rate for each perforated zone. Temperature variations in the wellbore, outside of the casing, are available with new technology such as distributed-temperature-sensor (DTS) fiber-optic cables. The main objective of this study is to relate the wellbore-temperature changes measured by DTS to the wellbore and fractured-interval injection rates during a multizone fracturing process. We develop a forward simulation model, on the basis of mass and energy conservation, for calculating the temperature profile and temperature history in the wellbore and in the rock surrounding the wellbore. The model allows for liquid flow into the fractured interval. Subsequently, the model is integrated with an inverse-estimation algorithm, based on a gradient search method, which is used to estimate flow rates both in the wellbore and into the fractured interval. A distinguishing feature of this work is the development of a radial model used to represent the temperature evolution in the near-wellbore region. Its higher order allows accurate calculation of the temperature in the wellbore while still capturing the fluid-flow and heat-transport aspects of hydraulic-fracture propagation. Our estimation results show good agreement between the calculated temperature profiles and those observed in the field with DTS. Also, the model is able to estimate a flow-rate history consistent with the total field-injection volume.
This work enables an accurate and quick interpretation of the wellbore DTS data to determine the interval injection rates during a hydraulic-fracturing process. Knowledge of accurate interval injection rates and the corresponding fracture characteristics can be useful in designing a better limited-entry completion that can optimize the fracture length by use of rate control and/or fluid diversion.
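The inverse-estimation idea can be illustrated with a toy one-parameter version: adjust an injection rate until a forward model's predicted temperatures match the DTS observations in a least-squares sense, using a finite-difference gradient search. This is a minimal stand-in under simplifying assumptions, not the paper's implementation:

```python
def estimate_rate(forward_model, observed, q0=0.0, lr=0.1, iters=200):
    """Toy gradient-search inversion: adjust a single injection
    rate q until forward_model(q), a list of predicted
    temperatures, matches the observed DTS temperatures in a
    least-squares sense."""
    def misfit(q):
        pred = forward_model(q)
        return sum((p - o) ** 2 for p, o in zip(pred, observed))

    q, eps = q0, 1e-6
    for _ in range(iters):
        # central finite-difference gradient of the misfit
        grad = (misfit(q + eps) - misfit(q - eps)) / (2 * eps)
        q -= lr * grad
    return q
```

The real problem inverts many interval rates at once against a physics-based wellbore model, but the structure, a forward simulator wrapped in a misfit-minimizing gradient search, is the same.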


Sensors ◽  
2020 ◽  
Vol 21 (1) ◽  
pp. 64
Author(s):  
Glenn Healey

Evaluating a player’s talent level based on batted balls is one of the most important and difficult tasks facing baseball analysts. An array of sensors installed in Major League Baseball stadiums captures seven terabytes of data during each game. These data heighten interest among spectators, but they can also be used to quantify the performance of players on the field. The weighted on-base average cube model has been used to generate reliable estimates of batter performance from measured batted-ball parameters, but research has shown that running speed is also a determinant of batted-ball performance. In this work, we used machine learning methods to combine a three-dimensional batted-ball vector measured by Doppler radar with running-speed measurements generated by stereoscopic optical sensors. We show that this process leads to an improved model of the batted-ball performance of players.
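A simple way to combine the radar-measured batted-ball vector with running speed, loosely in the spirit of this approach (the nearest-neighbour scheme and feature set here are assumptions, not the authors' model), is to average the observed outcomes of the most similar historical batted balls:

```python
def knn_value(history, query, k=3):
    """Estimate a batted ball's run value by averaging the observed
    outcomes of the k most similar historical balls, where
    similarity spans the 3D batted-ball vector (exit velocity,
    launch angle, spray angle) plus runner speed.
    `history`: list of ((ev, la, sa, speed), outcome_value)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    nearest = sorted(history, key=lambda h: dist(h[0], query))[:k]
    return sum(v for _, v in nearest) / len(nearest)
```

Adding speed as a fourth coordinate lets otherwise identical ground balls hit by fast and slow runners resolve to different expected values.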


2011 ◽  
Vol 317-319 ◽  
pp. 956-961 ◽  
Author(s):  
Jiang Bo Li ◽  
Xiu Qin Rao ◽  
Yi Bin Ying

Computer vision is a rapid, consistent, and objective inspection technique that has expanded into many diverse industries. Its speed and accuracy provide an alternative for an automated, non-destructive, and cost-effective way to meet ever-increasing production and quality requirements. This method of inspection has found applications in the agricultural industry, including the inspection and grading of fruits. This paper provides an introduction to the main detection and grading approaches for external fruit defects, including image processing and pattern recognition methods based on two-dimensional (2D) and three-dimensional (3D) fruit information, as well as hyperspectral and multispectral imaging. Their advantages and disadvantages are also discussed.


2021 ◽  
Vol 5 (1) ◽  
pp. 60-72
Author(s):  
Mohammed Yaseen Taha ◽  
Qahhar Muhammad Qadir

With the advent of Industry 4.0, its implementation in current factories has increased tremendously. Using autonomous mobile robots capable of navigating and handling material in a warehouse is one of the important pillars in converting current warehouse inventory control into a more automated and smart process aligned with Industry 4.0 needs. Indoor robot positioning and material finding are examples of location-based services (LBS) and are major aspects of Industry 4.0 implementation in warehouses. The Global Positioning System (GPS) is accurate and reliable for outdoor navigation and positioning but is not suitable for indoor use. Indoor positioning systems (IPS) have been proposed to overcome this shortcoming and extend this valuable service to indoor navigation and positioning. This paper proposes a simple, cost-effective, and easily configurable indoor navigation system built around an optical path-following unmanned ground vehicle (UGV) robot augmented by image processing and deep-learning computer vision algorithms. The proposed system prototype is capable of navigating an indoor area such as a warehouse by tracking and following a predefined traced path that covers all inventory zones, using infrared reflective sensors that detect black path lines on bright ground. As mentioned before, this general navigation mechanism is augmented and enhanced by artificial intelligence (AI) computer vision tasks so the robot can select the path to the required inventory zone as its destination and locate the requested material within that zone. The AI computer vision tasks used in the proposed prototype are deep learning object recognition algorithms for path selection and quick response (QR) code detection.
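The line-following part of such a UGV can be sketched as a differential-steering rule over an array of infrared reflectance sensors. The gains and speed scale below are illustrative assumptions, not the prototype's actual control parameters:

```python
def steer(sensors):
    """Differential steering from an array of IR reflectance
    readings (True = dark line detected under that sensor).
    Returns (left_speed, right_speed) in [-1, 1]."""
    n = len(sensors)
    hits = [i for i, s in enumerate(sensors) if s]
    if not hits:
        return (0.0, 0.0)  # line lost: stop
    # line position in [-1, 1]; negative means the line is left of center
    pos = (sum(hits) / len(hits)) / (n - 1) * 2 - 1
    base, gain = 0.6, 0.4
    # line to the left -> slow the left wheel, speed the right: turn left
    return (base + gain * pos, base - gain * pos)
```

Called in a loop against live sensor readings, this keeps the robot centered on the traced path; the higher-level AI tasks (path selection, QR detection) then decide which branch to take at junctions.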


2014 ◽  
Vol 4 (1) ◽  
pp. 23-29
Author(s):  
Constance Hilory Tomberlin

There are a multitude of reasons that a teletinnitus program can be beneficial, not only to patients but also within the hospital and audiology department. Using technology for tinnitus management improves appointment access for all patients, especially those who live at a distance; has been shown to be more cost effective when patients' travel is otherwise monetarily compensated; and allows multiple patients to be seen in the same time slots, giving patients who wish to be seen in-house greater access to the clinic. There is also the patients' excitement at being part of a new technology-based program. The Gulf Coast Veterans Health Care System (GCVHCS) saw the potential benefits of incorporating a teletinnitus program and began implementation in 2013. There were a few hurdles to work through during the initial organization and execution of the program. Since the establishment of the teletinnitus program, the GCVHCS has seen enhanced patient care, reduced travel compensation, improved clinic utilization and availability, genuine excitement about the new healthcare medium among staff and patients, and overall patient satisfaction.


2020 ◽  
Author(s):  
Nalika Ulapane ◽  
Karthick Thiyagarajan ◽  
Sarath Kodagoda

Classification has become a vital task in modern machine learning and artificial intelligence applications, including smart sensing. Numerous machine learning techniques are available to perform classification. Similarly, numerous practices, such as feature selection (i.e., selection of a subset of descriptor variables that optimally describe the output), are available to improve classifier performance. In this paper, we consider the case of a given supervised learning classification task that has to be performed making use of continuous-valued features. It is assumed that an optimal subset of features has already been selected, so no further feature reduction or feature addition is to be carried out. We then attempt to improve the classification performance by passing the given feature set through a transformation that produces a new feature set which we have named the “Binary Spectrum”. Via a case study on Pulsed Eddy Current sensor data captured from an infrastructure monitoring task, we demonstrate how the classification accuracy of a Support Vector Machine (SVM) classifier increases through the use of this Binary Spectrum feature, indicating the feature transformation’s potential for broader usage.
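The abstract does not spell out the Binary Spectrum construction, so the sketch below is purely hypothetical: one plausible way to expand a continuous-valued feature into binary features is to quantize it over its range and emit the bits of its level code as separate inputs. The paper's actual transformation may differ:

```python
def binary_spectrum(x, low, high, n_bits=8):
    """Hypothetical sketch of a continuous-to-binary feature
    transformation: clip x to [low, high], quantize it to an
    n_bits-level code, and return the code's bits (most
    significant first) as separate binary features."""
    x = min(max(x, low), high)
    level = int((x - low) / (high - low) * (2 ** n_bits - 1))
    return [(level >> b) & 1 for b in reversed(range(n_bits))]
```

An SVM would then be trained on the concatenated bit vectors of all features rather than on the raw continuous values.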

