easyQuake: Putting Machine Learning to Work for Your Regional Seismic Network or Local Earthquake Study

2020, Vol. 92 (1), pp. 555-563
Authors: Jacob I. Walter, Paul Ogwari, Andrew Thiel, Fernando Ferrer, Isaac Woelfel

Abstract We developed a Python package, easyQuake, that consists of a flexible set of tools for detecting and locating earthquakes from seismograms collected through International Federation of Digital Seismograph Networks (FDSN) data centers or in the field. The package leverages a machine-learning-driven phase picker, coupled with an associator, to produce a Quake Markup Language (QuakeML)-style catalog complete with magnitudes and P-wave polarity determinations. We describe how nightly computations on day-long seismograms identify lower-magnitude candidate events that were otherwise missed due to cultural noise, and how those events are incorporated into the Oklahoma Geological Survey statewide network catalog after manual analyst review. We discuss applications for the package, including earthquake detection for regional networks and microseismicity studies in arbitrary user-defined regions. Because the fundamentals of the package are scale invariant, it has wide application to seismological earthquake analysis from regional to local arrays and great potential for identifying early aftershocks that are otherwise missed. The package is fast and reliable: the computations are relatively efficient across a range of hardware, and we have encountered very few (∼1%) false-positive event detections in the Oklahoma case study. The utility and novelty of the package lie in its turnkey earthquake analysis with QuakeML file output, which can be dropped directly into existing real-time earthquake analysis systems. We have designed the functions to be modular, so that a user could replace the provided picker or associator with one of their choosing. The Python package is open source and under active development.
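
The modular picker/associator design the abstract describes can be sketched schematically. The following toy illustration uses invented function names, trivial picking logic, and made-up sample data; it is not easyQuake's actual API (the real package uses a deep-learning picker and an associator that accounts for travel times and station geometry). It only shows the plug-in structure in which either component can be swapped for another callable.

```python
# Schematic sketch of a modular pick-then-associate pipeline.
# All names and logic here are illustrative assumptions, not easyQuake's API.

def simple_picker(trace):
    """Toy P-pick: sample index of the largest absolute amplitude."""
    return max(range(len(trace)), key=lambda i: abs(trace[i]))

def simple_associator(picks, max_spread=5):
    """Group station picks into candidate events when their sample
    indices fall within max_spread of one another."""
    if not picks:
        return []
    picks = sorted(picks)
    events, current = [], [picks[0]]
    for p in picks[1:]:
        if p - current[-1] <= max_spread:
            current.append(p)
        else:
            events.append(current)
            current = [p]
    events.append(current)
    return events

def run_pipeline(traces, picker=simple_picker, associator=simple_associator):
    """Either stage can be replaced by any callable with the same shape."""
    picks = [picker(tr) for tr in traces]
    return associator(picks)

# Three stations recording one impulsive arrival at nearby samples:
traces = [
    [0, 0, 0, 9, 1, 0, 0, 0],
    [0, 0, 0, 0, 8, 1, 0, 0],
    [0, 0, 0, 0, 0, 7, 1, 0],
]
events = run_pipeline(traces)  # one candidate event from three picks
```

Because `run_pipeline` takes the picker and associator as arguments, substituting a different detection scheme requires no change to the surrounding workflow, which is the design property the abstract emphasizes.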

2021, Vol. 13 (1), pp. 1084-1104
Authors: Sayed S. R. Moustafa, Gad-Elkareem A. Mohamed, Mohamed Metwaly

Abstract This research presents a new approach that treats the conversion of earthquake magnitude as a supervised machine-learning problem, solved in multiple stages. First, moment magnitude (Mw) calculations were extended to lower-magnitude earthquakes using spectral P-wave analyses of vertical-component seismograms, improving the scaling relation between Mw and local magnitude (ML) for 138 earthquakes in northeastern Egypt. Second, the k-means clustering technique was applied to subdivide the mapped area into multiple seismic activity zones. This unsupervised clustering phase created five spatially coherent seismic areas for training regression algorithms, and supervised regression analysis of each seismic area was simpler and more accurate. Conversion relations between Mw and ML were calculated by linear regression, general orthogonal regression (GOR), and random sample consensus (RANSAC) regression. RANSAC and GOR produced better results than linear regression, which provides evidence for the effect of outliers on regression accuracy. Moreover, the overall multistage hybrid approach substantially reduced the residuals between measured and predicted magnitudes when individual seismic zones, rather than the full dataset, were considered. In 90% of the analyzed cases, Mw values could be regarded as equivalent to ML values to within 0.2 magnitude units. Moreover, the predicted magnitude conversion relations corresponded well to magnitude conversion relations reported for other seismogenic areas of Egypt.
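
The advantage of RANSAC over ordinary least squares that the abstract reports can be demonstrated on synthetic data (invented here, not the paper's Egyptian catalog). The sketch below fits an ML-to-Mw conversion line two ways; the consensus fit is a deterministic, exhaustive variant of RANSAC (trying the line through every point pair instead of random sampling), an assumption made for brevity and reproducibility rather than the paper's implementation.

```python
# Toy comparison of ordinary least squares vs. a RANSAC-style consensus fit
# for an ML -> Mw conversion relation. Data are synthetic and illustrative.

def ols(pairs):
    """Ordinary least-squares slope a and intercept b for y = a*x + b."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

def consensus_fit(pairs, tol=0.2):
    """RANSAC-style fit: take the line through each pair of points, keep
    the candidate with the most inliers (residual < tol), then refit OLS
    on those inliers. Exhaustive instead of random, for determinism."""
    best_inliers = []
    for i in range(len(pairs)):
        for j in range(i + 1, len(pairs)):
            (x1, y1), (x2, y2) = pairs[i], pairs[j]
            if x1 == x2:
                continue  # vertical line, skip
            a = (y2 - y1) / (x2 - x1)
            b = y1 - a * x1
            inliers = [(x, y) for x, y in pairs if abs(y - (a * x + b)) < tol]
            if len(inliers) > len(best_inliers):
                best_inliers = inliers
    return ols(best_inliers)

# Five points on the (made-up) relation Mw = 0.85*ML + 0.40, one gross outlier:
data = [(2.0, 2.10), (2.5, 2.52), (3.0, 2.95), (3.5, 3.38), (4.0, 3.80),
        (4.0, 5.00)]  # outlier
a_ols, b_ols = ols(data)            # slope dragged above 1 by the outlier
a_rob, b_rob = consensus_fit(data)  # recovers roughly the 0.85 slope
```

The 0.2-unit inlier tolerance mirrors the abstract's observation that, in 90% of cases, Mw and ML agree to within 0.2 magnitude units.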


i-com, 2021, Vol. 20 (1), pp. 19-32
Authors: Daniel Buschek, Charlotte Anlauff, Florian Lachner

Abstract This paper reflects on a case study of a user-centred concept development process for a Machine Learning (ML) based design tool, conducted at an industry partner. The resulting concept uses ML to match graphical user interface elements in sketches on paper to their digital counterparts to create consistent wireframes. A user study (N=20) with a working prototype shows that this concept is preferred by designers over the previous manual procedure. Reflecting on our process and findings, we discuss lessons learned for developing ML tools that respect practitioners' needs and practices.


2021, Vol. 11 (13), pp. 5826
Authors: Evangelos Axiotis, Andreas Kontogiannis, Eleftherios Kalpoutzakis, George Giannakopoulos

Ethnopharmacology experts face several challenges when identifying and retrieving documents and resources related to their scientific focus. The volume of sources that must be monitored, the variety of formats used, and the uneven quality of language across sources present what we call "big data" challenges in the analysis of this material. This study aims to understand if and how experts can be supported effectively by intelligent tools in the task of ethnopharmacological literature research. To this end, we use a real case study of ethnopharmacology research focused on the southern Balkans and the coastal zone of Asia Minor, and we propose a methodology for more efficient research in ethnopharmacology. Our work follows an "expert–apprentice" paradigm in an automatic URL extraction process, through crawling, where the apprentice is a machine learning (ML) algorithm combining active learning (AL) and reinforcement learning (RL), and the expert is the human researcher. ML-powered research improved the effectiveness and efficiency of the domain expert by 3.1 and 5.14 times, respectively, fetching a total of 420 relevant ethnopharmacological documents in only 7 h versus an estimated 36 h of human-expert effort. Utilizing artificial intelligence (AI) tools to support the researcher can therefore boost the efficiency and effectiveness of identifying and retrieving appropriate documents.
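
The expert-apprentice loop described above can be sketched as an active-learning filter: the apprentice scores each candidate document, auto-accepts confident hits, auto-rejects confident misses, and asks the expert only about the uncertain middle band, learning from the expert's answers. The naive keyword scorer, thresholds, and corpus below are all illustrative assumptions standing in for the paper's AL/RL apprentice.

```python
# Schematic expert-apprentice active-learning loop. The scoring rule and
# all data are invented for illustration; the paper's apprentice combines
# active learning and reinforcement learning over crawled URLs.

def apprentice_score(doc, relevant_terms):
    """Fraction of the known relevant terms that appear in the document."""
    words = set(doc.lower().split())
    return len(words & relevant_terms) / max(len(relevant_terms), 1)

def crawl_with_expert(docs, expert_label, relevant_terms,
                      accept=0.5, reject=0.2):
    """Auto-accept high scores, auto-reject low scores, and query the
    expert only on the uncertain band in between (active learning).
    Positive expert answers expand the apprentice's term set in place."""
    kept, queries = [], 0
    for doc in docs:
        s = apprentice_score(doc, relevant_terms)
        if s >= accept:
            kept.append(doc)                  # confidently relevant
        elif s > reject:
            queries += 1                      # uncertain: ask the expert
            if expert_label(doc):
                kept.append(doc)
                relevant_terms |= set(doc.lower().split())
        # else: confidently irrelevant, skipped without expert effort
    return kept, queries

terms = {"plant", "remedy", "extract"}
docs = [
    "Traditional plant remedy extract survey",  # auto-accepted
    "Herbal remedy notes",                      # uncertain: expert consulted
    "Football match results",                   # auto-rejected
]
kept, n_queries = crawl_with_expert(docs, lambda d: "remedy" in d.lower(), terms)
```

The point of the design is the one the abstract quantifies: the expert labels only a fraction of the stream (here one query for three documents), while the apprentice handles the rest.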


Energies, 2021, Vol. 14 (5), pp. 1377
Authors: Musaab I. Magzoub, Raj Kiran, Saeed Salehi, Ibnelwaleed A. Hussein, Mustafa S. Nasser

The traditional way to mitigate loss circulation in drilling operations is to use preventative and curative materials. However, it is difficult to quantify the amounts of materials, from every possible combination, needed to produce customized rheological properties. In this study, machine learning (ML) is used to develop a framework that identifies material compositions for loss circulation applications based on the desired rheological characteristics. The relation between the rheological properties and the mud components of polyacrylamide/polyethyleneimine (PAM/PEI)-based mud is assessed experimentally. Four ML algorithms were implemented to model the rheological data for various mud components at different concentrations and testing conditions: (a) k-Nearest Neighbor, (b) Random Forest, (c) Gradient Boosting, and (d) AdaBoost. The Gradient Boosting model showed the highest accuracy (91% and 74% for plastic and apparent viscosity, respectively) and can be used for subsequent hydraulic calculations. Overall, the experimental study presented in this paper, together with the proposed ML-based framework, adds valuable information to the design of PAM/PEI-based muds. The ML models allowed a wide range of rheology assessments for various drilling fluid formulations, with a mean accuracy of up to 91%. The case study showed that, with the appropriate combination of materials, rheological properties sufficient to prevent loss circulation can be achieved by managing the equivalent circulating density (ECD).
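
Of the four algorithms listed, k-Nearest Neighbor is the simplest to illustrate: predict a formulation's rheology as the average over the most similar formulations already tested. The sketch below uses made-up (PAM wt%, PEI wt%) concentrations and plastic-viscosity values, not the paper's measurements, and is not the Gradient Boosting model that performed best.

```python
import math

# Minimal k-nearest-neighbor regression over mud formulations.
# Features: (PAM wt%, PEI wt%); target: plastic viscosity in cP.
# All numbers are illustrative assumptions, not experimental data.

def knn_predict(train, query, k=2):
    """Average the target of the k training formulations whose
    component concentrations are closest in Euclidean distance."""
    nearest = sorted(train, key=lambda item: math.dist(item[0], query))[:k]
    return sum(pv for _, pv in nearest) / k

# (PAM wt%, PEI wt%) -> plastic viscosity (cP), synthetic:
train = [((1.0, 0.5), 12.0), ((1.5, 0.5), 15.0),
         ((2.0, 1.0), 22.0), ((2.5, 1.0), 26.0)]

# Predict for an untested formulation near the two low-concentration muds:
pv = knn_predict(train, (1.2, 0.5), k=2)  # averages 12.0 and 15.0 -> 13.5
```

Inverting this direction, as the proposed framework does (from desired rheology back to a composition), amounts to searching the formulation space for compositions whose predicted properties match the target.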

