Translation of large data bases for microcomputer-based application software: methodology and a case study

1989 ◽  
Vol 2 (3) ◽  
pp. 40-45 ◽  
Author(s):  
H.A. Smolleck ◽  
S.J. Ranade ◽  
B.E. Kindel ◽  
D. Malone ◽  
L.R. Kirk
1987 ◽  
Vol 7 (9) ◽  
pp. 18-27 ◽  
Author(s):  
Thomas Papathomas ◽  
James Schiavone ◽  
Bela Julesz
Computer ◽  
1981 ◽  
Vol 14 (1) ◽  
pp. 53-53
2021 ◽  
Author(s):  
Andrés Martínez

A methodology for optimizing modeling configuration in the numerical modeling of oil concentrations in underwater blowouts: a North Sea case study

Andrés Martínez*, Ana J. Abascal, Andrés García, Beatriz Pérez-Díaz, Germán Aragón, Raúl Medina

IHCantabria - Instituto de Hidráulica Ambiental de la Universidad de Cantabria, Avda. Isabel Torres, 15, 39011 Santander, Spain

* Corresponding author: [email protected]

Underwater oil and gas blowouts are not easy to repair. It may take months before the well is finally capped, releasing large amounts of oil into the marine environment. In addition, persistent oils (crude oil, fuel oil, etc.) break up and dissipate slowly, so they often reach the shore before the cleanup is completed, affecting vast extensions of sea and ocean and posing a major threat to marine organisms.

Consequently, numerical modeling of underwater blowouts demands great computing power. High-resolution, long-term data bases of wind and ocean currents are needed to properly model the trajectory of the spill at both the regional (open sea) and local (coastline) level, and to account for temporal variability. Moreover, a large number of particles and a high-resolution grid are unavoidable in order to model oil concentrations accurately, which is of utmost importance in risk assessment, where threshold concentrations (the exposure levels above which a compound can harm marine organisms) must be established.

In this study, an innovative methodology has been developed to optimize the modeling configuration, namely the number of particles and the grid resolution, in the modeling of an underwater blowout, with a view to accurately representing oil concentrations, especially when threshold concentrations are considered. To this end, statistical analyses (dimensionality reduction and clustering techniques) and numerical modeling have been applied.

The methodology comprises the following steps: (i) classification of i representative clusters of forcing patterns (based on PCA and K-means algorithms) from long-term wind and ocean-current hindcast data bases, so that forcing variability in the study area is accounted for; (ii) definition of j modeling scenarios, based on key blowout parameters (oil type, flow rate, etc.) and modeling configuration (number of particles and grid resolution); (iii) Lagrangian trajectory modeling of the combination of the i clusters of forcing patterns and the j modeling scenarios; (iv) sensitivity analysis of the Lagrangian trajectory model output (oil concentrations) to the modeling configuration; and (v) selection, as a result, of the optimal modeling configuration for a given underwater blowout and its key parameters.

The methodology has been applied to a hypothetical underwater blowout in the North Sea, one of the world's most active seas in terms of offshore oil and gas exploration and production. An oil spill with a flow rate of 5,000 cubic meters per day, flowing from the well over a 15-day period, has been modeled (assuming 31 days of subsequent drift, for a 46-day modeling period). Threshold concentrations of 0.1, 0.25, 1 and 10 grams per square meter have been applied in the sensitivity analysis.
The findings of this study stress the importance of the modeling configuration for accurate modeling of oil concentrations, in particular when lower threshold concentrations are considered.
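Step (i) of the methodology can be illustrated with standard tools. The sketch below is a minimal, hypothetical example of extracting representative forcing patterns from a hindcast data base using PCA followed by K-means, as the abstract describes; the array shapes, the choice of 10 principal components, and i = 8 clusters are illustrative assumptions, not values from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical hindcast: each row is one time step, columns are the
# wind/current components (u, v) at every grid node, flattened.
rng = np.random.default_rng(0)
n_timesteps, n_nodes = 5000, 300
forcing = rng.standard_normal((n_timesteps, 2 * n_nodes))

# (i) Dimensionality reduction: keep the leading modes of variability.
pca = PCA(n_components=10)
scores = pca.fit_transform(forcing)

# K-means in PCA space yields i representative forcing clusters.
i_clusters = 8
km = KMeans(n_clusters=i_clusters, n_init=10, random_state=0)
labels = km.fit_predict(scores)

# For each cluster, take the time step closest to the centroid as the
# representative forcing pattern to drive the Lagrangian model.
representatives = [
    int(np.argmin(np.linalg.norm(scores - c, axis=1)))
    for c in km.cluster_centers_
]
print(representatives)
```

Driving the trajectory model with only these i representatives, instead of the full hindcast, is what keeps the subsequent sensitivity analysis over the j modeling scenarios computationally tractable.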


2011 ◽  
Vol 16 (9) ◽  
pp. 1059-1067 ◽  
Author(s):  
Peter Horvath ◽  
Thomas Wild ◽  
Ulrike Kutay ◽  
Gabor Csucs

Imaging-based high-content screens often rely on single cell-based evaluation of phenotypes in large data sets of microscopic images. Traditionally, these screens are analyzed by extracting a few image-related parameters and using their ratios (linear single or multiparametric separation) to classify the cells into various phenotypic classes. In this study, the authors show how machine learning–based classification of individual cells outperforms those classical ratio-based techniques. Using fluorescent intensity, morphological, and texture features, they evaluated how the performance of the data analysis increases with an increasing number of features. Their findings are based on a case study involving an siRNA screen monitoring nucleoplasmic and nucleolar accumulation of a fluorescently tagged reporter protein. For the analysis, they developed a complete analysis workflow incorporating image segmentation, feature extraction, cell classification, hit detection, and visualization of the results. For the classification task, the authors have established a new graphical framework, the Advanced Cell Classifier, which provides very accurate high-content screen analysis with minimal user interaction, offering access to a variety of advanced machine learning methods.
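As a rough illustration of the comparison the abstract draws, the sketch below trains a supervised classifier on multiparametric per-cell features and contrasts it with a single ratio-based cut-off. The synthetic features, the phenotype labels, and the random-forest choice are assumptions for illustration only; the Advanced Cell Classifier offers a range of machine learning methods rather than this specific one.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-cell features: fluorescent intensity, morphology and
# texture descriptors (columns), one row per segmented cell.
rng = np.random.default_rng(1)
n_cells = 2000
features = rng.standard_normal((n_cells, 12))
# Hypothetical ground truth: nucleolar (1) vs nucleoplasmic (0) phenotype,
# depending on two features plus noise.
labels = (features[:, 0] + 0.5 * features[:, 3]
          + rng.normal(0.0, 1.0, n_cells) > 0).astype(int)

# Classical approach: threshold a single intensity parameter (feature 0).
ratio_pred = (features[:, 0] > 0).astype(int)
print("single-parameter accuracy:", (ratio_pred == labels).mean())

# Machine-learning approach: use all features jointly.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("random-forest accuracy:",
      cross_val_score(clf, features, labels, cv=5).mean())
```

Because the synthetic phenotype depends on more than one feature, the multiparametric classifier recovers accuracy that no single-feature threshold can, which mirrors the abstract's central claim.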


Author(s):  
Célia Talma Gonçalves ◽  
Rui Camacho ◽  
Eugénio Oliveira

Whenever new sequences of DNA or proteins have been decoded, it is almost compulsory to look at similar sequences and at papers describing those sequences, in order to collect relevant information concerning the function and activity of the new sequences and/or learn what is already known about similar sequences. Current web sites and sequence data bases usually link a set of curated paper references to each sequence. Those links are a good starting point when looking for relevant information related to a set of sequences. One way to implement such an approach is to run a BLAST search with the newly decoded sequences and collect similar sequences, then look at the papers linked to those similar sequences. Most often the number of retrieved papers is small, and one has to search large data bases for further relevant papers. This paper proposes a process of generating a classifier based on the initial set of relevant papers. First, the authors collect similar sequences using an alignment algorithm such as BLAST. Then they use the enlarged set of papers to construct a classifier. Finally, the classifier is used to automatically enlarge the set of relevant papers by searching MEDLINE.
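A minimal sketch of the classification stage is given below, under the assumption that abstracts of the relevant papers (those linked to the query sequence and its BLAST hits) and of background papers are already available as plain text. It uses a TF-IDF representation with a naive Bayes classifier, which is one common choice for this kind of text classification rather than necessarily the authors' own; the BLAST expansion and the MEDLINE query itself are omitted, and all example abstracts are made up.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: abstracts of papers linked to the query
# sequence and its BLAST hits (relevant) vs. unrelated papers.
relevant = [
    "crystal structure of a serine protease and its catalytic activity",
    "functional characterization of a homologous protease domain",
]
irrelevant = [
    "survey of distributed database query optimization techniques",
    "case study of microcomputer software translation methodology",
]
texts = relevant + irrelevant
labels = [1] * len(relevant) + [0] * len(irrelevant)

# TF-IDF features + naive Bayes: a simple paper-relevance classifier.
clf = make_pipeline(TfidfVectorizer(stop_words="english"), MultinomialNB())
clf.fit(texts, labels)

# Score unseen abstracts (e.g. retrieved from MEDLINE) and rank them
# by predicted relevance to enlarge the paper set.
candidates = ["substrate specificity of trypsin-like proteases in vivo"]
scores = clf.predict_proba(candidates)[:, 1]
print(sorted(zip(scores, candidates), reverse=True))
```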

