The Intermediate Data Structure (IDS) for Longitudinal Historical Microdata, version 4

2014 ◽  
Vol 1 ◽  
pp. 1-26
Author(s):  
George Alter ◽  
Kees Mandemakers

The Intermediate Data Structure (IDS) is a standard data format that has been adopted by several large longitudinal databases on historical populations. Since the publication of the first version in Historical Social Research in 2009, two improved and extended versions have been published in the Collaboratory Historical Life Courses. In this publication we present version 4 which is the latest ‘official’ standard of the IDS. Discussions with users over the last four years resulted in important changes, like the inclusion of a new table defining the hierarchical relationships among ‘contexts’, decision schemes for recording relationships, additional fields in the metadata table, rules for handling stillbirths, a reciprocal model for relationships, guidance for linking IDS data with geospatial information, and the introduction of an extended IDS for computed variables.
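The table-per-attribute design mentioned above can be sketched in miniature. This is an illustrative sketch only, with simplified column names (the official IDS v4 specification defines the exact schema); it shows how individual attributes live in rows of an EAV-style INDIVIDUAL table and how the new table for hierarchical relationships among contexts might link one context to another.

```python
from collections import namedtuple

# Simplified stand-ins for IDS-style tables; field names are illustrative.
IndividualRow = namedtuple("IndividualRow", "id_d id_i type value day month year")
ContextContextRow = namedtuple("ContextContextRow", "id_d id_c_low id_c_high relation")

# One person's attributes stored as one row per attribute (EAV layout):
individual = [
    IndividualRow("DB1", 42, "BIRTH_DATE", "", 3, 7, 1851),
    IndividualRow("DB1", 42, "SEX", "Female", None, None, None),
    IndividualRow("DB1", 42, "OCCUPATION", "Weaver", 1, 1, 1870),
]

# A hierarchical relationship among contexts (the kind of link the new
# context table records), e.g. a house located within a municipality:
hierarchy = [ContextContextRow("DB1", 7, 3, "Located in")]

# Adding a new attribute requires a new row, not a new column:
individual.append(IndividualRow("DB1", 42, "RELIGION", "Catholic", None, None, None))
print(len([r for r in individual if r.id_i == 42]))  # -> 4 attribute rows for person 42
```

The point of the row-per-attribute layout is that a database can add new attribute types without any schema change, which is what makes the format shareable across very different source databases.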

2018 ◽  
Vol 5 ◽  
pp. 1-2
Author(s):  
Paul Puschmann ◽  
Luciana Quaranta

Historical Life Course Studies, a journal in population studies, aims to stimulate and facilitate the implementation of IDS (Intermediate Data Structure, a standard data format for large historical databases), and to publish the results from (comparative) research with the help of large historical databases. The journal publishes not only empirical articles, but also descriptions (of the construction) of new and existing large historical databases, as well as articles dealing with database documentation, the transformation of existing databases into the IDS format, the development of algorithms and extraction software and all other issues related to the methodology of large historical databases.


2015 ◽  
Vol 2 ◽  
pp. 37-37
Author(s):  
Koen Matthijs ◽  
Paul Puschmann

Historical Life Course Studies, a journal in population studies, aims to stimulate and facilitate the implementation of IDS (Intermediate Data Structure, a standard data format for large historical databases), and to publish the results from (comparative) research with the help of large historical databases. The journal publishes not only empirical articles, but also descriptions (of the construction) of new and existing large historical databases, as well as articles dealing with database documentation, the transformation of existing databases into the IDS format, the development of algorithms and extraction software and all other issues related to the methodology of large historical databases.


2021 ◽  
Vol 10 ◽  
pp. 71-75
Author(s):  
George Alter

The Intermediate Data Structure (IDS) encourages sharing historical life course data by storing data in a common format. To encompass the complexity of life histories, IDS relies on data structures that are unfamiliar to most social scientists. This article examines four features of IDS that make it flexible and expandable: the Entity-Attribute-Value model, the relational database model, embedded metadata, and the Chronicle file. I also consider IDS from the perspective of current discussions about sharing data across scientific domains. We can find parallels to IDS in other fields that may lead to future innovations.
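A minimal sketch of how the Entity-Attribute-Value layout supports a Chronicle-style view, i.e. everything recorded about one individual merged into a single time-ordered stream. Field names and data here are invented for illustration and are not the official IDS schema.

```python
# EAV rows: one row per recorded attribute or event (illustrative fields).
rows = [
    {"id_i": 1, "type": "BIRTH", "value": "", "year": 1820},
    {"id_i": 1, "type": "OCCUPATION", "value": "Farmer", "year": 1845},
    {"id_i": 1, "type": "MARRIAGE", "value": "", "year": 1843},
    {"id_i": 1, "type": "DEATH", "value": "", "year": 1881},
]

def chronicle(rows, id_i):
    """Time-ordered stream of everything recorded about one individual."""
    return sorted((r for r in rows if r["id_i"] == id_i), key=lambda r: r["year"])

print([r["type"] for r in chronicle(rows, 1)])
# -> ['BIRTH', 'MARRIAGE', 'OCCUPATION', 'DEATH']
```

Because every fact is just a row, the chronological merge needs no knowledge of which attribute types exist, which is what makes the structure expandable.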


CivilEng ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 174-192
Author(s):  
Alcinia Zita Sampaio ◽  
Augusto Martins Gomes

The building information modelling (BIM) methodology supports collaborative work, based on the centralization of all information in a federated BIM model and on an efficient level of interoperability between BIM-based platforms. Concerning structural design, the interoperability capacity of the most widely used software presents limitations that must be identified, and alternative solutions must be proposed. This study analyzes the process of transferring structural models between modeling and structural analysis tools. Distinct building cases were carried out in order to identify the types of limitations found in the transfer processes concerning two-way data flow between several software packages. The study involves the modeling software ArchiCAD 2020, Revit 2020, and AECOsim 2019 and the structural analysis tools SAP 2020, Robot 2020, and ETABS 2020. The transfer processes are realized in two ways: using the native data format, and using a universal standard data transfer format, the Industry Foundation Classes (IFC). The level of maturity of BIM in structural design is still relatively low, caused essentially by interoperability problems, but despite the limitations detected, this study shows, through the development of several building cases, that the methodology has clear advantages in the development of the structural project.


Sensors ◽  
2018 ◽  
Vol 18 (7) ◽  
pp. 2327 ◽  
Author(s):  
Jinsong Zhang ◽  
Wenjie Xing ◽  
Mengdao Xing ◽  
Guangcai Sun

In recent years, terahertz imaging systems and techniques have developed rapidly and have gradually become a leading frontier field. With the advantages of low radiation and the ability to penetrate clothing, terahertz imaging technology has been widely used for the detection of concealed weapons or other contraband carried by personnel at airports and other secure locations. This paper aims to detect such concealed items with a deep learning method, chosen for its strong detection performance and real-time detection speed. Based on an analysis of the characteristics of terahertz images, an effective detection system is proposed. First, a large number of terahertz images are collected and labeled in a standard data format. Secondly, the terahertz classification dataset is established and a classification method based on transfer learning is proposed. Then, considering the special distribution of terahertz images, an improved faster region-based convolutional neural network (Faster R-CNN) method based on threshold segmentation is proposed for detecting the human body and other objects independently. Finally, experimental results demonstrate the effectiveness and efficiency of the proposed method for terahertz image detection.
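The threshold-segmentation idea behind the detector can be sketched very simply: separate the (bright) human-body region of a terahertz image from the (dark) background before running a detector on each region independently. The image, threshold value, and function name below are made up for illustration; the paper's actual pipeline feeds such masks into an improved Faster R-CNN.

```python
import numpy as np

def segment_body(image, threshold=120):
    """Return a binary mask of pixels brighter than `threshold` (assumed body region)."""
    return image > threshold

# Synthetic stand-in for a terahertz frame: dark background, bright body patch.
rng = np.random.default_rng(0)
img = rng.integers(0, 80, size=(64, 64))  # background intensities < threshold
img[20:40, 20:40] = 200                   # bright 20x20 "body" patch
mask = segment_body(img)
print(int(mask.sum()))  # -> 400, the pixel count of the bright patch
```

Splitting the image this way lets the body and the background be handled by separately tuned detection passes, which is the motivation the abstract gives for the threshold-segmentation step.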


2021 ◽  
Vol 10 ◽  
pp. 9-12
Author(s):  
Kris Inwood ◽  
Hamish Maxwell-Stewart

Kees Mandemakers has enriched historical databases in the Netherlands and internationally through the development of the Historical Sample of the Netherlands, the Intermediate Data Structure, a practical implementation of rule-based record linking (LINKS), and his personal encouragement of high-quality longitudinal data in a number of countries.


2003 ◽  
Vol 12 (2) ◽  
Author(s):  
R. L. Riddle ◽  
S. D. Kawaler

As the WET moves to CCD systems, we move away from the uniformity of the standard WET photometer into an arena where each system can be radically different. There are many possible CCD photometry systems that can fulfil the requirements of a WET instrument, but each of these will have their own unique native data format. During XCov22, it became readily apparent that the WET requires a defined data format for all CCD data that arrives at HQ. This paper describes the proposed format for the next generation of WET data; the final version will be the default format for XQED, the new photometry package discussed elsewhere in these proceedings.


2015 ◽  
Vol 733 ◽  
pp. 867-870
Author(s):  
Zhen Zhong Jin ◽  
Zheng Huang ◽  
Hua Zhang

The suffix tree is a useful data structure constructed for indexing strings. However, when it comes to large datasets of discrete contents, most existing algorithms become very inefficient. Discrete datasets need to be indexed in many fields, such as record analysis, data analysis in sensor networks, and association analysis. This paper presents an algorithm, STD (Suffix Tree for Discrete contents), that performs very efficiently on discrete input datasets. It introduces several effective intermediate data structures for discrete strings and also handles the situation in which the discrete input strings have similar characteristics. Moreover, STD keeps the advantages of existing implementations designed for successive input strings. Experiments were conducted to evaluate the performance and show that the method works well.
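For readers unfamiliar with the underlying structure, here is a deliberately naive suffix trie, showing only the basic idea of indexing every suffix of a string so that substring queries become simple walks from the root. This is not STD, and practical construction uses far more efficient algorithms (e.g. Ukkonen's); it is a minimal sketch of the concept.

```python
def build_suffix_trie(s):
    """Insert every suffix of s into a nested-dict trie."""
    s += "$"  # terminator so no suffix is a prefix of another
    root = {}
    for i in range(len(s)):
        node = root
        for ch in s[i:]:
            node = node.setdefault(ch, {})
    return root

def contains(trie, pattern):
    """A pattern is a substring iff it is a path from the root."""
    node = trie
    for ch in pattern:
        if ch not in node:
            return False
        node = node[ch]
    return True

trie = build_suffix_trie("banana")
print(contains(trie, "ana"), contains(trie, "nab"))  # -> True False
```

The naive version costs O(n^2) space and time, which is exactly why compressed suffix trees and specialized variants for discrete content matter at scale.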


2012 ◽  
Vol 588-589 ◽  
pp. 785-789
Author(s):  
Jun Wang ◽  
Jing He ◽  
Xin Yu Xu

In this paper, the design of an LDPC decoder for CMMB is presented. LDPC decoding algorithms for CMMB are analyzed, and the Normalized Min-Sum Algorithm (Normalized MSA) is selected to implement the decoder; the algorithm is simulated to determine the design parameters. A partial parallel architecture based on the Normalized MSA is proposed, and the architecture is simulated with a fixed-point model to determine the best quantification scheme for the initial information and the intermediate data format.
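The core of the Normalized MSA is the check-node update: each outgoing message takes the sign product and the minimum magnitude of the *other* incoming messages, scaled by a normalization factor. A floating-point sketch of that one update follows (the paper's decoder is fixed-point and hardware-oriented; the value of alpha here is illustrative, not the paper's).

```python
def check_node_update(msgs, alpha=0.8):
    """Normalized Min-Sum check-node update.

    msgs: incoming log-likelihood-ratio messages from variable nodes.
    Returns one outgoing message per edge, each computed from the
    other incoming messages: alpha * (product of signs) * (min magnitude).
    """
    out = []
    for i in range(len(msgs)):
        others = msgs[:i] + msgs[i + 1:]
        sign = 1.0
        for m in others:
            sign *= 1.0 if m >= 0 else -1.0
        mag = min(abs(m) for m in others)
        out.append(alpha * sign * mag)
    return out

print(check_node_update([2.0, -1.0, 4.0]))  # -> [-0.8, 1.6, -0.8]
```

The min-and-scale form is what makes the algorithm hardware-friendly: it replaces the transcendental functions of full belief propagation with comparisons and one multiply, which is why quantification of the intermediate messages becomes the key fixed-point design question.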


2012 ◽  
Vol 2012 ◽  
pp. 1-9
Author(s):  
Alejandro Cristo ◽  
David Valencia ◽  
Pablo J. Martínez ◽  
Rosa M. Pérez

Because of the availability of an overwhelming amount of remote sensing data obtained by different instruments, new techniques and applications have been developed to detect changes that occur in a particular area of the Earth or that affect a large part of it. These studies have used datasets covering different wavelength ranges (visible, IR, radar, and so on), but common to all of them is the need for great accuracy, to ensure that no bias is introduced by data correction; otherwise, false positives may be generated. Also, many studies have used several different datasets for the same area to detect changes (usually called data fusion), but no specific data structure has been designed for this purpose. In this paper, we propose a data structure to be used for accurate change detection. This structure is transparent to the user and can be used for data fusion to improve such studies.

