Recommended standard for seismic (/radar) data files in the personal computer environment

Geophysics ◽  
1990 ◽  
Vol 55 (9) ◽  
pp. 1260-1271 ◽  
Author(s):  
S. E. Pullan

This paper is the result of the work of a subcommittee of SEG’s Engineering and Groundwater Geophysics Committee. It recommends a data file format for raw or processed shallow seismic or digital radar data in the small computer environment. It is recommended that this format be known as the SEG-2 format.

2020 ◽  
Author(s):  
A. E. Sullivan ◽  
S. J. Tappan ◽  
P. J. Angstman ◽  
A. Rodriguez ◽  
G. C. Thomas ◽  
...  

With advances in microscopy and computer science, the technique of digitally reconstructing, modeling, and quantifying microscopic anatomies has become central to many fields of biological research. MBF Bioscience has chosen to openly document its digital reconstruction file format, the Neuromorphological File Specification (4.0), available at www.mbfbioscience.com/filespecification (Angstman et al. 2020). This format, created and maintained by MBF Bioscience, is broadly utilized by the neuroscience community. The data format’s structure and capabilities have evolved since its inception, with modifications made to keep pace with advancements in microscopy and with the scientific questions raised by experts worldwide. More recent modifications to the neuromorphological data format ensure it abides by the Findable, Accessible, Interoperable, and Reusable (FAIR) data standards promoted by the International Neuroinformatics Coordinating Facility (INCF; Wilkinson et al. 2016). The incorporated metadata make it easy to identify and repurpose these data types for downstream application and investigation. This publication describes key elements of the file format and details their structural advantages, in an effort to encourage the reuse of these rich data files for alternative analyses or reproduction of derived conclusions.


2019 ◽  
Vol 16 (9) ◽  
pp. 3824-3829
Author(s):  
Deepak Ahlawat ◽  
Deepali Gupta

Due to advances in technology, there has been a great surge in data, generated mainly by social websites, internet services, and similar sources. Large data files are combined to create a big data architecture, and managing data files at such volume is not easy, so modern techniques have been developed to manage bulk data. To organize and utilize such big data, Hadoop provides the Hadoop Distributed File System (HDFS), an architecture used when traditional methods are insufficient to manage the data. In this paper, a novel clustering algorithm is implemented to manage large amounts of data. The concepts and frameworks of big data are studied, and a novel algorithm combining K-means with cosine-similarity clustering is developed. The clustering algorithm is evaluated using the precision and recall parameters, and the results obtained show that the approach successfully addresses the big data management problem.
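The combination the abstract describes can be sketched compactly: standard K-means iteration, but with cosine similarity (rather than Euclidean distance) deciding cluster assignment. This is a minimal illustration, not the authors' implementation; all names and the toy vectors are assumptions.

```python
import math
import random

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def kmeans_cosine(vectors, k, iters=20, seed=0):
    """K-means where each vector joins the centroid it is most similar to."""
    rng = random.Random(seed)
    centroids = rng.sample(vectors, k)
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in vectors:
            best = max(range(k), key=lambda i: cosine_sim(v, centroids[i]))
            clusters[best].append(v)
        for i, members in enumerate(clusters):
            if members:  # empty clusters keep their old centroid
                dim = len(members[0])
                centroids[i] = [sum(m[d] for m in members) / len(members)
                                for d in range(dim)]
    return clusters, centroids

# Two directions in 2-D: the algorithm should separate them.
vectors = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]]
clusters, centroids = kmeans_cosine(vectors, k=2)
```

Because cosine similarity ignores vector magnitude, this variant groups points by direction, which is the usual motivation for using it on sparse document or feature vectors.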


2010 ◽  
Vol 20 (03n04) ◽  
pp. 63-76 ◽  
Author(s):  
AMANI N. TAHAT ◽  
WA'EL SALAH ◽  
AWNI B. HALLAK

This paper describes a shell that facilitates the use of the existing PIXE analysis software package PIXAN. In this work, we designed, wrote, and tested a utility program called WPASS and used it to examine several PIXE spectra. WPASS links the PIXAN modules and makes their use more convenient than before, automatically handling the PEAKFIT (BATTY) and THICK programs. It outputs the results into several files belonging to the same data file: data files converted from one-column format (OCF) to PIXANPC format; control, graphics, and result files from PEAKFIT; control and result files from THICK; and options for plotting the results graphically on the PC and converting the graphics files into their components for publication. WPASS also has new features that account for secondary interelement fluorescence. WPASS has been used successfully for the analysis of PIXE spectra and for inner-shell ionization studies.


1987 ◽  
Vol 14 (3) ◽  
pp. 181-181
Author(s):  
Debra Anne Horn ◽  
Mark R. McMinn

This article describes a BASIC program for the IBM Personal Computer (PC) that simulates a tachistoscope. The program utilizes data files created by the user or sample files created by the authors. The program is useful for classroom demonstration of classic experiments in cognitive psychology.


Big data is one of the most influential technologies of the modern era. However, in order to support the maturity of big data systems, the development and sustenance of heterogeneous environments is required. This, in turn, requires integration of technologies as well as concepts. Computing and storage are the two core components of any big data system. With that said, big data storage needs to communicate with the execution engine and other processing and visualization technologies to create a comprehensive solution. This brings the facet of big data file formats into the picture. This paper classifies available big data file formats into five categories, namely text-based, row-based, column-based, in-memory, and data storage services. It also compares the advantages, shortcomings, and possible use cases of available big data file formats for Hadoop, which is the foundation for most big data computing technologies. Lastly, it provides a discussion of the tradeoffs that must be considered while choosing a file format for a big data system, providing a framework for the creation of file format selection criteria.
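The row-based versus column-based distinction drawn above can be made concrete in a few lines of plain Python. This is a conceptual sketch only (the real formats, e.g. Avro for row-based and Parquet/ORC for column-based, add schemas, encoding, and compression); the data and function names are illustrative.

```python
# Row-based layout: each record is stored contiguously (CSV/Avro style).
rows = [
    {"id": 1, "name": "a", "value": 10.0},
    {"id": 2, "name": "b", "value": 20.0},
    {"id": 3, "name": "c", "value": 30.0},
]

# Column-based layout: each field is stored contiguously (Parquet/ORC style).
columns = {
    "id": [1, 2, 3],
    "name": ["a", "b", "c"],
    "value": [10.0, 20.0, 30.0],
}

def mean_value_rows(rows):
    # Row layout: every full record is touched even though
    # only one field is needed for the aggregate.
    return sum(r["value"] for r in rows) / len(rows)

def mean_value_columns(columns):
    # Column layout: only the "value" column is scanned,
    # which is why columnar formats favor analytical queries.
    col = columns["value"]
    return sum(col) / len(col)
```

The same tradeoff runs the other way for record-at-a-time writes and reads, where row-based formats avoid reassembling a record from many columns — one of the selection criteria the paper's framework weighs.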


2019 ◽  
Vol 16 (69) ◽  
pp. 1-10
Author(s):  
Narjis Mezaal Shati ◽  
Ali Jassim Mohamed Ali

In the current study, a steganography approach is utilized to hide various data file formats in wave-file covers. Least significant bit (LSB) insertion is used to embed regular computer files (such as graphics, executable (exe), sound, text, hypertext markup language (HTML), etc.) in a wave file at a 2-bit hiding rate. The test results show good performance in hiding any data file in a wave file.
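A 2-bit LSB scheme of this kind can be sketched with only the Python standard library: each payload byte is split into four 2-bit chunks, and each chunk overwrites the two least significant bits of one cover byte, so the audible change per sample is at most 3 out of 255. This is a minimal illustration under assumed names, not the authors' implementation.

```python
import io
import wave

def embed(cover: bytes, payload: bytes) -> bytes:
    """Hide payload in the 2 least significant bits of each cover byte."""
    chunks = [(byte >> shift) & 0b11
              for byte in payload
              for shift in (6, 4, 2, 0)]        # four 2-bit chunks per byte
    if len(chunks) > len(cover):
        raise ValueError("cover too small for payload")
    stego = bytearray(cover)
    for i, pair in enumerate(chunks):
        stego[i] = (stego[i] & 0b11111100) | pair
    return bytes(stego)

def extract(stego: bytes, n_bytes: int) -> bytes:
    """Recover n_bytes of payload from the 2 LSBs of the stego bytes."""
    out = bytearray()
    for i in range(0, n_bytes * 4, 4):
        byte = 0
        for b in stego[i:i + 4]:
            byte = (byte << 2) | (b & 0b11)
        out.append(byte)
    return bytes(out)

# Build a small 8-bit mono WAV cover entirely in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(1)          # 8-bit samples: one cover byte per sample
    w.setframerate(8000)
    w.writeframes(bytes(128 for _ in range(256)))   # "silent" cover signal
buf.seek(0)
with wave.open(buf, "rb") as w:
    cover = w.readframes(w.getnframes())

secret = b"exe"
stego_frames = embed(cover, secret)
recovered = extract(stego_frames, len(secret))
```

For 16-bit audio, the same bit manipulation would normally target only the low byte of each sample; the 8-bit case above keeps the sketch short.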


2021 ◽  
Author(s):  
Aimee Neeley ◽  
Stace E. Beaulieu ◽  
Chris Proctor ◽  
Ivona Cetinić ◽  
Joe Futrelle ◽  
...  

This technical manual guides the user through the process of creating a data table for the submission of taxonomic and morphological information for plankton and other particles, derived from images, to a repository. Guidance is provided on producing the documentation that should accompany the submission of plankton and other particle data to a repository, on describing data collection and processing techniques, and on outlining the creation of a data file. Field names include scientificName, which represents the lowest-level taxonomic classification (e.g., genus if not certain of species, family if not certain of genus), and scientificNameID, the unique identifier from a reference database such as the World Register of Marine Species or AlgaeBase. The data table described here includes the field names associatedMedia, scientificName/scientificNameID for both automated and manual identification, biovolume, area_cross_section, length_representation, and width_representation. Additional steps instruct the user on how to format their data for submission to the Ocean Biodiversity Information System (OBIS). Examples of documentation and data files are provided for the user to follow. The documentation requirements and data table format are approved by both NASA’s SeaWiFS Bio-optical Archive and Storage System (SeaBASS) and the National Science Foundation’s Biological and Chemical Oceanography Data Management Office (BCO-DMO).
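A data table built around the field names listed above can be written as plain CSV. The field names below are taken from the abstract; every value (the image URL, taxon, identifier, and measurements) is a hypothetical example, not data from the manual.

```python
import csv
import io

# Field names as listed in the abstract.
fieldnames = [
    "associatedMedia",
    "scientificName",
    "scientificNameID",
    "biovolume",
    "area_cross_section",
    "length_representation",
    "width_representation",
]

# One illustrative row; all values are made up for the example,
# and the LSID below is a placeholder, not a verified identifier.
row = {
    "associatedMedia": "https://example.org/images/roi_00042.png",
    "scientificName": "Thalassiosira",     # genus-level identification
    "scientificNameID": "urn:lsid:marinespecies.org:taxname:0000000",
    "biovolume": 1234.5,
    "area_cross_section": 210.7,
    "length_representation": 52.1,
    "width_representation": 30.4,
}

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames)
writer.writeheader()
writer.writerow(row)
table = buf.getvalue()
```

Using a fixed header row like this keeps the column order stable across submissions, which matters when a repository validates files against a declared field list.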


2020 ◽  
Vol 34 (01) ◽  
pp. 295-302
Author(s):  
Heng Zhang ◽  
Xiaofei Wang ◽  
Jiawen Chen ◽  
Chenyang Wang ◽  
Jianxin Li

With the proliferation of mobile device users, Device-to-Device (D2D) communication has ascended to the spotlight in social networks as a way for users to share and exchange enormous amounts of data. Different from classic online social networks (OSNs) like Twitter and Facebook, each single data file shared in a D2D social network is often very large, e.g., a video, image, or document. Sometimes a small number of interesting data files may dominate the network traffic and lead to heavy network congestion. To reduce traffic congestion and design effective caching strategies, it is highly desirable to investigate how data files propagate in an offline D2D social network and to derive a diffusion model that fits this new form of social network. However, existing works mainly focus on link prediction, which cannot predict the overall diffusion path when the network topology is unknown. In this article, we propose D2D-LSTM, based on Long Short-Term Memory (LSTM), which aims to predict complete content propagation paths in a D2D social network. Taking the current user's time, geography, and category preference into account, historical features of the previous path can be captured as well. It utilizes prototype users for prediction so as to achieve better generalization. To the best of our knowledge, this is the first attempt to use a real-world large-scale dataset of a mobile social network (MSN) to predict propagation path trees in a top-down order. Experimental results corroborate that the proposed algorithm achieves superior prediction performance compared with state-of-the-art approaches. Furthermore, D2D-LSTM can achieve 95% average precision for terminal-class prediction and 17% accuracy for tree-path hits.
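The top-down data flow (score candidate receivers against prototype users on time, geography, and category features, then grow the propagation tree from the source) can be illustrated without the learned model. In this sketch a simple nearest-prototype distance stands in for the paper's LSTM; the prototype names, feature triples, and candidate users are all invented for illustration.

```python
import math

# Hypothetical prototype users: (time-of-day, geography, category) features.
PROTOTYPES = {
    "morning_commuter": (8.0, 0.2, 0.9),
    "office_worker":    (14.0, 0.5, 0.3),
}

def score(user_feats, proto_feats):
    # Negative Euclidean distance: a closer prototype gives a higher score.
    return -math.dist(user_feats, proto_feats)

def rank_receivers(candidates, top_k=2):
    """Rank candidate receivers of a shared file by best prototype match."""
    ranked = sorted(
        candidates,
        key=lambda name: max(
            score(candidates[name], p) for p in PROTOTYPES.values()
        ),
        reverse=True,
    )
    return ranked[:top_k]

candidates = {
    "u1": (8.1, 0.2, 0.9),    # resembles the commuter prototype
    "u2": (20.0, 5.0, 0.0),   # matches no prototype well
    "u3": (14.0, 0.5, 0.3),   # identical to the office prototype
}
likely_next_hops = rank_receivers(candidates, top_k=2)
```

Repeating this ranking at each selected receiver grows a path tree top-down; the LSTM in the paper replaces the fixed distance score with a learned, history-aware one.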

