CAFDRS INNOVATIONS AND FIELD RESULTS

1973 ◽  
Vol 13 (1) ◽  
pp. 107
Author(s):  
Charles M. Edwards

The Computer Augmented Field Data Recording System (CAFDRS) is a unique combination of a standard seismic digital field recording system and a mini-computer system, specifically conceived to provide a major advance in monitoring and analysing data as it is being acquired in the field. On-line operations include the capability to composite, vertically stack, and correlate VIBROSEIS data, control the recording operation, and perform power-spectrum analysis. Off-line operations include the ability to play back data at a later time and perform most of these same operations, plus produce a normal-move-out corrected common-depth-point stacked section. Experience with CAFDRS in the field demonstrates both the versatility of the concept and the effectiveness of being able to analyse data "on the spot", particularly when using the VIBROSEIS technique. Noise, source array and geophone analyses are made "on-line" as the experimental program is conducted. By using the frequency analysis program to obtain power spectra, the optimum vibrator sweep can be established for the particular area. The 16-bit word base used in the mini-computer system preserves the 76 dB of dynamic range of the data acquired by the XDS-1010 or the DFS III, two of the standard field systems now used in CAFDRS. The dynamic range of the compositing and correlating operations accomplished with the SPC-16 mini-computers compares favorably with that obtained in a large-scale data processing centre. The noise editing feature included in the double-precision floating-point compositing program reduces the deteriorating effect that large bursts of random noise have on most composited data. Composited data correlated in the field compares almost exactly with the same data correlated in the data centre. The NMO/CDP raw stacks produced in the field compare favorably with ones produced in the data centre. Note: VIBROSEIS is a trademark of the Continental Oil Company and DINOSEIS is a trademark of the Atlantic Richfield Oil Company.
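As an illustration of two of the on-line operations described above, the following minimal sketch (NumPy/SciPy) shows vertical compositing with a simple threshold-based burst-noise edit, followed by VIBROSEIS-style correlation of a composited trace with the vibrator sweep. The editing scheme and function names are assumptions for illustration, not the CAFDRS implementation.

```python
import numpy as np
from scipy.signal import correlate

def noise_edited_composite(traces, k=4.0):
    """Vertically stack repeated recordings of a trace, zero-weighting samples
    whose amplitude exceeds k times the median absolute level across records
    (an assumed burst-noise edit, not the exact CAFDRS scheme)."""
    traces = np.asarray(traces, dtype=np.float64)          # accumulate in double precision
    thresh = k * np.median(np.abs(traces), axis=0) + 1e-12
    weights = (np.abs(traces) <= thresh).astype(float)
    return (weights * traces).sum(axis=0) / np.maximum(weights.sum(axis=0), 1.0)

def vibroseis_correlate(trace, sweep):
    """Crosscorrelate a recorded trace with the vibrator sweep to compress
    the sweep to an impulsive wavelet; keep causal lags only."""
    xcorr = correlate(trace, sweep, mode="full")
    lag0 = len(sweep) - 1                                   # index of zero lag
    return xcorr[lag0:lag0 + len(trace)]
```

A power spectrum for sweep design could then be estimated from the stacked trace, e.g. `np.abs(np.fft.rfft(stacked))**2`.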

Author(s):  
Simon Höllerer ◽  
Laetitia Papaxanthos ◽  
Anja Cathrin Gumpinger ◽  
Katrin Fischer ◽  
Christian Beisel ◽  
...  

Predicting quantitative effects of gene regulatory elements (GREs) on gene expression is a longstanding challenge in biology. Machine learning models for gene expression prediction may be able to address this challenge, but they require experimental datasets that link large numbers of GREs to their quantitative effect. However, current methods to generate such datasets experimentally are either restricted to specific applications or limited by their technical complexity and error-proneness. Here we introduce DNA-based phenotypic recording as a widely applicable and practical approach to generate very large datasets linking GREs to quantitative functional readouts of high precision, temporal resolution, and dynamic range, solely relying on sequencing. This is enabled by a novel DNA architecture comprising a site-specific recombinase, a GRE that controls recombinase expression, and a DNA substrate modifiable by the recombinase. Both GRE sequence and substrate state can be determined in a single sequencing read, and the frequency of modified substrates amongst constructs harbouring the same GRE is a quantitative, internally normalized readout of this GRE’s effect on recombinase expression. Using next-generation sequencing, the quantitative expression effect of extremely large GRE sets can be assessed in parallel. As a proof of principle, we apply this approach to record translation kinetics of more than 300,000 bacterial ribosome binding sites (RBSs), collecting over 2.7 million sequence-function pairs in a single experiment. Further, we generalize from these large-scale datasets by a novel deep learning approach that combines ensembling and uncertainty modelling to predict the function of untested RBSs with high accuracy, substantially outperforming state-of-the-art methods. The combination of DNA-based phenotypic recording and deep learning represents a major advance in our ability to predict quantitative function from genetic sequence.
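As a minimal sketch of the internally normalized readout described above, the following Python snippet (with assumed, hypothetical field names) groups sequencing reads by GRE sequence and takes the fraction of reads whose recombinase substrate is in the modified state:

```python
from collections import defaultdict

def gre_readout(reads):
    """reads: iterable of (gre_sequence, substrate_modified) pairs, one per
    sequencing read. Returns {gre_sequence: fraction_modified}, an internally
    normalized estimate of each GRE's effect on recombinase expression."""
    counts = defaultdict(lambda: [0, 0])            # [modified, total] per GRE
    for gre, modified in reads:
        counts[gre][0] += bool(modified)
        counts[gre][1] += 1
    return {gre: m / t for gre, (m, t) in counts.items()}

# toy example: two RBS variants with different read counts
reads = [("RBS_A", True), ("RBS_A", False), ("RBS_B", True), ("RBS_B", True)]
print(gre_readout(reads))   # {'RBS_A': 0.5, 'RBS_B': 1.0}
```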


Author(s):  
W.J. de Ruijter ◽  
Sharma Renu

Established methods for measurement of lattice spacings and angles of crystalline materials include x-ray diffraction, microdiffraction and HREM imaging. Structural information from HREM images is normally obtained off-line with the traveling table microscope or by the optical diffractogram technique. We present a new method for precise measurement of lattice vectors from HREM images using an on-line computer connected to the electron microscope. It has already been established that an image of crystalline material can be represented by a finite number of sinusoids. The amplitude and the phase of these sinusoids are affected by the microscope transfer characteristics, which are strongly influenced by the settings of defocus, astigmatism and beam alignment. However, the frequency of each sinusoid is solely a function of overall magnification and periodicities present in the specimen. After proper calibration of the overall magnification, lattice vectors can be measured unambiguously from HREM images. Measurement of lattice vectors is a statistical parameter estimation problem which is similar to amplitude, phase and frequency estimation of sinusoids in 1-dimensional signals as encountered, for example, in radar, sonar and telecommunications. It is important to properly model the observations, the systematic errors and the non-systematic errors. The observations are modelled as a sum of (2-dimensional) sinusoids. In the present study the components of the frequency vector of the sinusoids are the only parameters of interest. Non-systematic errors in recorded electron images are described as white Gaussian noise. The most important systematic error is geometric distortion. Lattice vectors are measured using a two-step procedure. First, a coarse estimate is obtained using a Fast Fourier Transform on an image section of interest. Prior to Fourier transformation the image section is multiplied with a window, which gradually falls off to zero at the edges. The user indicates interactively the periodicities of interest by selecting spots in the digital diffractogram. A fine search for each selected frequency is implemented using a bilinear interpolation, which is dependent on the window function. It is possible to refine the estimate even further using a non-linear least-squares estimation. The first two steps provide the proper starting values for the numerical minimization (e.g., Gauss-Newton). This third step increases the precision by 30%, to the highest theoretically attainable (the Cramér-Rao lower bound). In the present studies we use a Gatan 622 TV camera attached to the JEM 4000EX electron microscope. Image analysis is implemented on a MicroVAX II computer equipped with a powerful array processor and real-time image processing hardware. The typical precision, as defined by the standard deviation of the distribution of measurement errors, is found to be <0.003 Å measured on single-crystal silicon and <0.02 Å measured on small (10-30 Å) specimen areas. These values are roughly ten times larger than predicted by theory. Furthermore, the measured precision is observed to be independent of the signal-to-noise ratio (determined by the number of averaged TV frames). Obviously, the precision is restricted by geometric distortion, mainly caused by the TV camera. For this reason, we are replacing the Gatan 622 TV camera with a modern high-grade CCD-based camera system. Such a system not only has negligible geometric distortion, but also high dynamic range (>10,000) and high resolution (1024x1024 pixels). The geometric distortion of the projector lenses can be measured and corrected through re-sampling of the digitized image.
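The two-step frequency search described above can be sketched in one dimension: a windowed FFT supplies a coarse peak, and a non-linear least-squares fit of a single sinusoid refines it. This is an illustrative NumPy/SciPy sketch under assumed names, not the on-line implementation itself.

```python
import numpy as np
from scipy.optimize import least_squares

def estimate_frequency(signal):
    """Coarse FFT search followed by least-squares refinement of a single
    sinusoid's frequency (in cycles per sample)."""
    n = len(signal)
    window = np.hanning(n)                        # taper that falls off to zero at the edges
    spectrum = np.fft.rfft(window * signal)
    k0 = np.argmax(np.abs(spectrum[1:])) + 1      # coarse peak, skipping the DC term
    t = np.arange(n)

    def residual(p):
        amplitude, phase, freq = p
        return amplitude * np.cos(2 * np.pi * freq * t + phase) - signal

    start = [np.std(signal) * np.sqrt(2), 0.0, k0 / n]
    return least_squares(residual, x0=start).x[2]
```

On a synthetic test such as `estimate_frequency(np.cos(2 * np.pi * 0.1234 * np.arange(256)))`, the refined estimate should recover the true frequency to well below one FFT bin, mirroring the coarse-then-fine strategy used for the 2-dimensional lattice vectors.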


2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using single precision floating point representation reduces the size of data and computation time by a factor of two relative to the double precision conventionally used in electronic structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. Reduced size of data can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
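The clean-up iterations mentioned above are analogous to mixed-precision iterative refinement: do the bulk of the work in single precision, then recover double-precision accuracy with a few residual-correction steps. The following is a generic sketch of that idea for a linear system (NumPy, assumed names), not the coupled-cluster implementation itself.

```python
import numpy as np

def mixed_precision_solve(A, b, cleanup_iters=2):
    """Solve A x = b mostly in float32, then restore double-precision accuracy
    with a few clean-up (iterative refinement) steps."""
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)     # cheap single-precision pass
    for _ in range(cleanup_iters):
        r = b - A @ x                                     # residual evaluated in double precision
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x
```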


2020 ◽  
Vol 15 (7) ◽  
pp. 750-757
Author(s):  
Jihong Wang ◽  
Yue Shi ◽  
Xiaodan Wang ◽  
Huiyou Chang

Background: At present, using computational methods to predict drug-target interactions (DTIs) is a very important step in the discovery of new drugs and in drug repositioning. The potential DTIs identified by machine learning methods can provide guidance for biochemical or clinical experiments. Objective: The goal of this article is to combine the latest network representation learning methods for drug-target prediction research, improve model prediction capabilities, and promote new drug development. Methods: We use the large-scale information network embedding (LINE) method to extract network topology features of drugs, targets, diseases, etc., integrate features obtained from heterogeneous networks, construct binary classification samples, and use the random forest (RF) method to predict DTIs. Results: The experiments compare common classifiers (RF, LR, and SVM) as well as typical network representation learning methods (LINE, Node2Vec, and DeepWalk). The combined LINE-RF method achieves the best results, reaching an AUC of 0.9349 and an AUPR of 0.9016. Conclusion: The LINE-based learning method can effectively learn hidden features of drugs, targets, and diseases from the network topology, and combining features learned from multiple networks enhances their representational power. RF is an effective supervised learning method; therefore, the LINE-RF combination is widely applicable.
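A minimal sketch of the classification stage follows: pair features are built by concatenating precomputed drug and target embeddings (here random placeholders standing in for LINE vectors), a random forest is trained, and AUC/AUPR are reported. The array names and toy labels are assumptions for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, average_precision_score
from sklearn.model_selection import train_test_split

def dti_features(drug_emb, target_emb, pairs):
    """Concatenate drug and target embeddings for each (drug_idx, target_idx) pair."""
    return np.array([np.concatenate([drug_emb[d], target_emb[t]]) for d, t in pairs])

# toy data: 100 drugs, 80 targets, 64-dimensional embeddings from a network-embedding step
rng = np.random.default_rng(0)
drug_emb, target_emb = rng.normal(size=(100, 64)), rng.normal(size=(80, 64))
pairs = rng.integers(0, [100, 80], size=(2000, 2))
labels = rng.integers(0, 2, size=2000)                  # placeholder interaction labels

X = dti_features(drug_emb, target_emb, pairs)
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]
print("AUC:", roc_auc_score(y_te, scores), "AUPR:", average_precision_score(y_te, scores))
```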


Geophysics ◽  
2008 ◽  
Vol 73 (2) ◽  
pp. S47-S61 ◽  
Author(s):  
Paul Sava ◽  
Oleg Poliannikov

The fidelity of depth seismic imaging depends on the accuracy of the velocity models used for wavefield reconstruction. Models can be decomposed into two components, corresponding to large-scale and small-scale variations. In practice, the large-scale velocity model component can be estimated with high accuracy using repeated migration/tomography cycles, but the small-scale component cannot. When the earth has significant small-scale velocity components, wavefield reconstruction does not completely describe the recorded data, and migrated images are perturbed by artifacts. There are two possible ways to address this problem: (1) improve wavefield reconstruction by estimating more accurate velocity models and image using conventional techniques (e.g., wavefield crosscorrelation) or (2) reconstruct wavefields with conventional methods using the known background velocity model but improve the imaging condition to alleviate the artifacts caused by the imprecise reconstruction. We describe the unknown component of the velocity model as a random function with local spatial correlations. Imaging of data perturbed by such random variations is characterized by statistical instability, i.e., various wavefield components image at the wrong locations, depending on the actual realization of the random model. Statistical stability can be achieved by preprocessing the reconstructed wavefields prior to the imaging condition. We use Wigner distribution functions to attenuate the random noise present in the reconstructed wavefields, parameterized as a function of image coordinates. Wavefield filtering using Wigner distribution functions and conventional imaging can be lumped together into a new form of imaging condition that we call an interferometric imaging condition because of its similarity to concepts from recent work on interferometry. The interferometric imaging condition can be formulated both for zero-offset and for multioffset data, leading to robust, efficient imaging procedures that effectively attenuate imaging artifacts caused by unknown velocity models.
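A schematic one-dimensional sketch of this idea: filter each reconstructed wavefield with a local pseudo-Wigner distribution over a small spatial lag window, then apply the usual zero-lag crosscorrelation imaging condition to the filtered fields. The array shapes, names, and window size are assumptions; this is not the authors' implementation.

```python
import numpy as np

def pseudo_wigner(u, max_lag=3):
    """Local pseudo-Wigner distribution over a spatial lag window:
    w[x, t] = sum_h u[x + h, t] * u[x - h, t], which averages out rapid
    spatial fluctuations caused by the random velocity component."""
    nx, _ = u.shape
    w = np.zeros_like(u)
    for h in range(-max_lag, max_lag + 1):
        lo, hi = abs(h), nx - abs(h)
        w[lo:hi] += u[lo + h:hi + h] * u[lo - h:hi - h]
    return w

def interferometric_image(source_wf, receiver_wf, max_lag=3):
    """Zero-lag crosscorrelation imaging condition applied to the
    Wigner-filtered wavefields (each of shape nx-by-nt)."""
    ws = pseudo_wigner(source_wf, max_lag)
    wr = pseudo_wigner(receiver_wf, max_lag)
    return np.sum(ws * wr, axis=1)               # image as a function of position
```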


1968 ◽  
Vol 1 (6) ◽  
pp. 605-614 ◽  
Author(s):  
Robert E. Stenson ◽  
Linda Crouse ◽  
Walter L. Henry ◽  
Donald C. Harrison
