Marcellus and Haynesville Drilling Data: Analysis and Lessons Learned

Author(s):  
Quan Guo ◽  
Lujun Ji ◽  
Vusal Rajabov ◽  
James E. Friedheim ◽  
Rhonna Wu


2021 ◽  
Author(s):  
Vallet Laurent ◽  
Gutarov Pavel ◽  
Chevallier Bertrand ◽  
Converset Julien ◽  
Paterson Graeme ◽  
...  

Abstract In the current economic environment, delivering wells on time and on budget is paramount. Well construction is a significant cost of any field development, and it is more important than ever to minimize these costs and to avoid unnecessary lost time and non-productive time. Invisible lost time and non-productive time can represent as much as 40% of the cost of well construction and can lead to more severe issues such as delayed first oil, loss of the well, or environmental impact. Much work has gone into developing systems to optimize well construction, yet the industry still fails to routinely detect and avoid problematic events such as stuck pipe, kicks, losses, and washouts. Standardizing drilling practices can also improve efficiency: repetitive, systematic practices have shown a 30% cost reduction, and automation, enabled by machine learning, is the key process for realizing it. Drilling data analysis is essential for understanding the reasons behind poor performance and for detecting potential downhole events at an early stage. Done efficiently, it gives the user tools to examine the well construction process as a whole, rather than only the last few hours as is done at the rig site. Analyzing drilling data requires access to reliable real-time data that can be compared against a data model accounting for context (BHA, fluids, well geometry). Well planning, including multi-well offset analysis of risks, drilling processes, and geology, enables a user to consider the full well construction process and define levels of automation. This paper applies machine learning to a post-drilling multi-well analysis of a deepwater field development known for its drilling challenges. Minimizing human input through automation allowed us to compare offset wells and to identify the root cause of non-productive time.
In our case study, an increase in pressure while drilling should have led to immediate mitigation measures to avoid a wiper trip. This paper presents techniques used to systematize surface data analysis, together with a workflow that identified a near pack-off at an early stage in an automated way. Applying this process during operations could have reduced drilling time for the 12 ¼-in. section by 10%.
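The early-detection workflow the abstract describes (flagging a sustained pressure increase while drilling before it develops into a pack-off) can be illustrated with a minimal sketch. This is not the authors' method: the rolling-median baseline, the 5% rise threshold, and the persistence count are all illustrative assumptions, not field-calibrated values, and the function name is hypothetical.

```python
from collections import deque

def flag_packoff_precursor(pressures, window=20, rel_rise=0.05, persist=5):
    """Flag sample indices where standpipe pressure stays above a rolling-median
    baseline by more than `rel_rise` for at least `persist` consecutive samples.

    All thresholds are illustrative placeholders, not field-calibrated values.
    """
    baseline_buf = deque(maxlen=window)  # recent pressures for the baseline
    streak = 0
    flags = []
    for i, p in enumerate(pressures):
        if len(baseline_buf) == window:
            # Rolling median of the previous `window` samples as the baseline.
            baseline = sorted(baseline_buf)[window // 2]
            if p > baseline * (1 + rel_rise):
                streak += 1
                if streak >= persist:
                    flags.append(i)  # sustained rise: possible pack-off precursor
            else:
                streak = 0  # excursion ended; reset persistence counter
        baseline_buf.append(p)
    return flags

# Synthetic example: steady 3000-psi drilling, then a sustained step to 3400 psi.
alerts = flag_packoff_precursor([3000.0] * 30 + [3400.0] * 20)
```

In a real-time setting the same logic would run incrementally on the surface-data stream, and the alert would prompt mitigation (e.g., circulating clean) before a wiper trip becomes necessary; a production detector would also account for planned pressure changes from flow-rate or mud-weight adjustments, which this sketch ignores.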


2018 ◽  
Author(s):  
Zhenyu Chen ◽  
Allen Lo ◽  
Maria Neves Carrasquilla ◽  
Zhiguo Zhao ◽  
Tanveer Shahid

2020 ◽  
Vol 19 ◽  
pp. 160940692096870
Author(s):  
Lindsay Giesen ◽  
Allison Roeser

Improvements to qualitative data analysis software (QDAS) have both facilitated and complicated the qualitative research process. This technology allows us to work with a greater volume of data than ever before, but the increased volume frequently requires a large team to process and code it. This paper presents insights on how to successfully structure and manage a team of staff coding qualitative data. We draw on our experience in team-based coding of 154 interview transcripts for a study of school meal programs. The team consisted of four coders, three senior reviewers, and a lead analyst and an external qualitative methodologist, who together shepherded the coding process. Lessons learned from this study include: 1) establish a strong and supportive management structure; 2) build skills gradually by breaking training and coding into “bite-sized” pieces; and 3) develop detailed reference materials to guide your coding team.


Author(s):  
Andrew Binet ◽  
Vedette Gavin ◽  
Leigh Carroll ◽  
Mariana Arcaya

One impediment to expanding the prevalence and quality of community-engaged research is a shortage of instructive resources for collaboratively designing research instruments and analyzing data with community members. This article describes how a consortium of community residents, grassroots community organizations, and academic and public institutions implemented collaborative research design and data analysis processes as part of a participatory action research (PAR) study investigating the relationship between neighborhoods and health in the greater Boston area. We report how nine different groups of community residents were engaged in developing a multi-dimensional survey instrument, generating and testing hypotheses, and interpreting descriptive statistics and preliminary findings. We conclude by reflecting on the importance of balancing planned strategies for building and sustaining resident engagement with improvisational facilitation that is responsive to residents’ characteristics, interests and needs in the design and execution of collaborative research design and data analysis processes.


1982 ◽  
Vol 12 (2) ◽  
pp. 181-190 ◽  
Author(s):  
David Royse ◽  
Stephen Keller ◽  
James L. Schwartz

In an evaluation of a mass drug education program involving over 1,000 students, a mental health funding body in southwestern Ohio learned a number of lessons which should prove useful to anyone engaging in, or planning to engage in, evaluations of drug education programs. Problems such as instrument selection, logistical constraints and data analysis are discussed. Suggestions are given as to how the present study could have been improved and recommendations for future evaluations are made.

